00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2382 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3647 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.226 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.226 The recommended git tool is: git 00:00:00.226 using credential 00000000-0000-0000-0000-000000000002 00:00:00.228 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.242 Fetching changes from the remote Git repository 00:00:00.247 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.269 Using shallow fetch with depth 1 00:00:00.269 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.269 > git --version # timeout=10 00:00:00.284 > git --version # 'git version 2.39.2' 00:00:00.284 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.299 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.299 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.039 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.051 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.063 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.063 > git config core.sparsecheckout # timeout=10 00:00:08.075 > git read-tree -mu HEAD # timeout=10 00:00:08.091 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.115 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.115 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.212 [Pipeline] Start of Pipeline 00:00:08.224 [Pipeline] library 00:00:08.225 Loading library shm_lib@master 00:00:08.225 Library shm_lib@master is cached. Copying from home. 00:00:08.240 [Pipeline] node 00:00:08.248 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.250 [Pipeline] { 00:00:08.260 [Pipeline] catchError 00:00:08.261 [Pipeline] { 00:00:08.273 [Pipeline] wrap 00:00:08.281 [Pipeline] { 00:00:08.288 [Pipeline] stage 00:00:08.289 [Pipeline] { (Prologue) 00:00:08.478 [Pipeline] sh 00:00:08.759 + logger -p user.info -t JENKINS-CI 00:00:08.779 [Pipeline] echo 00:00:08.781 Node: GP11 00:00:08.790 [Pipeline] sh 00:00:09.085 [Pipeline] setCustomBuildProperty 00:00:09.100 [Pipeline] echo 00:00:09.102 Cleanup processes 00:00:09.109 [Pipeline] sh 00:00:09.392 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.392 4138006 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.404 [Pipeline] sh 00:00:09.684 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.684 ++ grep -v 'sudo pgrep' 00:00:09.684 ++ awk '{print $1}' 00:00:09.684 + sudo kill -9 00:00:09.684 + true 00:00:09.697 [Pipeline] cleanWs 00:00:09.707 [WS-CLEANUP] Deleting project workspace... 00:00:09.707 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.713 [WS-CLEANUP] done 00:00:09.717 [Pipeline] setCustomBuildProperty 00:00:09.734 [Pipeline] sh 00:00:10.017 + sudo git config --global --replace-all safe.directory '*' 00:00:10.113 [Pipeline] httpRequest 00:00:10.420 [Pipeline] echo 00:00:10.423 Sorcerer 10.211.164.20 is alive 00:00:10.433 [Pipeline] retry 00:00:10.435 [Pipeline] { 00:00:10.450 [Pipeline] httpRequest 00:00:10.454 HttpMethod: GET 00:00:10.455 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.455 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.475 Response Code: HTTP/1.1 200 OK 00:00:10.476 Success: Status code 200 is in the accepted range: 200,404 00:00:10.476 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.470 [Pipeline] } 00:00:15.486 [Pipeline] // retry 00:00:15.494 [Pipeline] sh 00:00:15.774 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.790 [Pipeline] httpRequest 00:00:16.193 [Pipeline] echo 00:00:16.195 Sorcerer 10.211.164.20 is alive 00:00:16.205 [Pipeline] retry 00:00:16.207 [Pipeline] { 00:00:16.221 [Pipeline] httpRequest 00:00:16.225 HttpMethod: GET 00:00:16.225 URL: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:16.226 Sending request to url: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:16.235 Response Code: HTTP/1.1 200 OK 00:00:16.235 Success: Status code 200 is in the accepted range: 200,404 00:00:16.235 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:01:34.205 [Pipeline] } 00:01:34.222 [Pipeline] // retry 00:01:34.230 [Pipeline] sh 00:01:34.510 + tar --no-same-owner -xf spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:01:37.077 [Pipeline] sh 00:01:37.373 + git -C spdk log --oneline -n5 00:01:37.373 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:01:37.373 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:01:37.373 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:01:37.373 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 
00:01:37.373 029355612 bdev_ut: add manual examine bdev unit test case 00:01:37.393 [Pipeline] withCredentials 00:01:37.402 > git --version # timeout=10 00:01:37.415 > git --version # 'git version 2.39.2' 00:01:37.429 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:37.431 [Pipeline] { 00:01:37.440 [Pipeline] retry 00:01:37.442 [Pipeline] { 00:01:37.456 [Pipeline] sh 00:01:37.733 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:38.309 [Pipeline] } 00:01:38.329 [Pipeline] // retry 00:01:38.335 [Pipeline] } 00:01:38.353 [Pipeline] // withCredentials 00:01:38.364 [Pipeline] httpRequest 00:01:38.758 [Pipeline] echo 00:01:38.760 Sorcerer 10.211.164.20 is alive 00:01:38.769 [Pipeline] retry 00:01:38.771 [Pipeline] { 00:01:38.789 [Pipeline] httpRequest 00:01:38.794 HttpMethod: GET 00:01:38.794 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:38.794 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:38.797 Response Code: HTTP/1.1 200 OK 00:01:38.797 Success: Status code 200 is in the accepted range: 200,404 00:01:38.797 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:43.996 [Pipeline] } 00:01:44.016 [Pipeline] // retry 00:01:44.024 [Pipeline] sh 00:01:44.304 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:46.214 [Pipeline] sh 00:01:46.494 + git -C dpdk log --oneline -n5 00:01:46.494 caf0f5d395 version: 22.11.4 00:01:46.494 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:46.494 dc9c799c7d vhost: fix missing spinlock unlock 00:01:46.494 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:46.494 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:46.504 [Pipeline] } 00:01:46.519 [Pipeline] // stage 00:01:46.527 [Pipeline] stage 00:01:46.529 [Pipeline] { (Prepare) 00:01:46.547 [Pipeline] writeFile 00:01:46.563 [Pipeline] sh 00:01:46.841 + logger -p user.info -t JENKINS-CI 00:01:46.852 [Pipeline] sh 00:01:47.130 + logger -p user.info -t JENKINS-CI 00:01:47.141 [Pipeline] sh 00:01:47.418 + cat autorun-spdk.conf 00:01:47.418 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.418 SPDK_TEST_NVMF=1 00:01:47.418 SPDK_TEST_NVME_CLI=1 00:01:47.418 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.418 SPDK_TEST_NVMF_NICS=e810 00:01:47.418 SPDK_TEST_VFIOUSER=1 00:01:47.418 SPDK_RUN_UBSAN=1 00:01:47.418 NET_TYPE=phy 00:01:47.418 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:47.418 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.423 RUN_NIGHTLY=1 00:01:47.428 [Pipeline] readFile 00:01:47.453 [Pipeline] withEnv 00:01:47.455 [Pipeline] { 00:01:47.468 [Pipeline] sh 00:01:47.748 + set -ex 00:01:47.748 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:47.748 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:47.748 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.748 ++ SPDK_TEST_NVMF=1 00:01:47.748 ++ SPDK_TEST_NVME_CLI=1 00:01:47.748 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.748 ++ SPDK_TEST_NVMF_NICS=e810 00:01:47.748 ++ SPDK_TEST_VFIOUSER=1 00:01:47.748 ++ SPDK_RUN_UBSAN=1 00:01:47.748 ++ NET_TYPE=phy 00:01:47.748 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:47.748 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:47.748 ++ RUN_NIGHTLY=1 00:01:47.748 + case $SPDK_TEST_NVMF_NICS in 00:01:47.748 + DRIVERS=ice 00:01:47.748 + [[ tcp == \r\d\m\a ]] 
00:01:47.748 + [[ -n ice ]] 00:01:47.748 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:47.748 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:47.748 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:47.748 rmmod: ERROR: Module irdma is not currently loaded 00:01:47.748 rmmod: ERROR: Module i40iw is not currently loaded 00:01:47.748 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:47.748 + true 00:01:47.748 + for D in $DRIVERS 00:01:47.748 + sudo modprobe ice 00:01:47.748 + exit 0 00:01:47.756 [Pipeline] } 00:01:47.771 [Pipeline] // withEnv 00:01:47.776 [Pipeline] } 00:01:47.789 [Pipeline] // stage 00:01:47.797 [Pipeline] catchError 00:01:47.799 [Pipeline] { 00:01:47.811 [Pipeline] timeout 00:01:47.811 Timeout set to expire in 1 hr 0 min 00:01:47.812 [Pipeline] { 00:01:47.825 [Pipeline] stage 00:01:47.827 [Pipeline] { (Tests) 00:01:47.838 [Pipeline] sh 00:01:48.116 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:48.116 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:48.116 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:48.116 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:48.116 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:48.116 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:48.116 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:48.116 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:48.116 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:48.116 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:48.116 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:48.116 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:48.116 + source /etc/os-release 00:01:48.116 ++ NAME='Fedora Linux' 00:01:48.116 ++ VERSION='39 (Cloud Edition)' 00:01:48.116 ++ ID=fedora 00:01:48.116 ++ VERSION_ID=39 00:01:48.116 ++ VERSION_CODENAME= 00:01:48.116 ++ PLATFORM_ID=platform:f39 00:01:48.116 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:48.116 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:48.116 ++ LOGO=fedora-logo-icon 00:01:48.116 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:48.116 ++ HOME_URL=https://fedoraproject.org/ 00:01:48.116 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:48.116 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:48.116 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:48.116 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:48.116 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:48.116 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:48.116 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:48.116 ++ SUPPORT_END=2024-11-12 00:01:48.116 ++ VARIANT='Cloud Edition' 00:01:48.116 ++ VARIANT_ID=cloud 00:01:48.116 + uname -a 00:01:48.116 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:48.116 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:49.049 Hugepages 00:01:49.049 node hugesize free / total 00:01:49.049 node0 1048576kB 0 / 0 00:01:49.049 node0 2048kB 0 / 0 00:01:49.049 node1 1048576kB 0 / 0 00:01:49.049 node1 2048kB 0 / 0 00:01:49.049 00:01:49.049 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:49.049 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:49.049 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:49.049 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:49.049 I/OAT 
0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:49.049 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:49.049 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:49.049 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:49.049 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:49.049 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:49.049 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:49.049 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:49.049 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:49.049 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:49.049 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:49.049 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:49.049 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:49.307 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:49.307 + rm -f /tmp/spdk-ld-path 00:01:49.307 + source autorun-spdk.conf 00:01:49.307 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.307 ++ SPDK_TEST_NVMF=1 00:01:49.307 ++ SPDK_TEST_NVME_CLI=1 00:01:49.307 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.307 ++ SPDK_TEST_NVMF_NICS=e810 00:01:49.307 ++ SPDK_TEST_VFIOUSER=1 00:01:49.307 ++ SPDK_RUN_UBSAN=1 00:01:49.307 ++ NET_TYPE=phy 00:01:49.307 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:49.307 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:49.307 ++ RUN_NIGHTLY=1 00:01:49.307 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:49.307 + [[ -n '' ]] 00:01:49.307 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:49.307 + for M in /var/spdk/build-*-manifest.txt 00:01:49.307 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:49.307 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:49.307 + for M in /var/spdk/build-*-manifest.txt 00:01:49.307 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:49.307 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:49.307 + for M in /var/spdk/build-*-manifest.txt 00:01:49.307 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:49.307 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:49.307 ++ uname 00:01:49.307 + [[ Linux == \L\i\n\u\x ]] 00:01:49.307 + sudo dmesg -T 00:01:49.307 + sudo dmesg --clear 00:01:49.307 + dmesg_pid=4138715 00:01:49.307 + [[ Fedora Linux == FreeBSD ]] 00:01:49.307 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:49.307 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:49.307 + sudo dmesg -Tw 00:01:49.308 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:49.308 + [[ -x /usr/src/fio-static/fio ]] 00:01:49.308 + export FIO_BIN=/usr/src/fio-static/fio 00:01:49.308 + FIO_BIN=/usr/src/fio-static/fio 00:01:49.308 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:49.308 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:49.308 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:49.308 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:49.308 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:49.308 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:49.308 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:49.308 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:49.308 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:49.308 23:26:23 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:49.308 23:26:23 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:49.308 23:26:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.308 23:26:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:49.308 23:26:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:49.308 23:26:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.308 23:26:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:49.308 23:26:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:49.308 23:26:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:49.308 23:26:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:49.308 23:26:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:49.308 23:26:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:49.308 23:26:23 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:49.308 23:26:23 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:49.308 23:26:23 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:49.308 23:26:23 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:49.308 23:26:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:49.308 23:26:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:49.308 23:26:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:49.308 23:26:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:49.308 23:26:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:49.308 23:26:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.308 23:26:23 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.308 23:26:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.308 23:26:23 -- paths/export.sh@5 -- $ export PATH 00:01:49.308 23:26:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.308 23:26:23 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:49.308 23:26:23 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:49.308 23:26:23 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732055183.XXXXXX 00:01:49.308 23:26:23 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732055183.odr4oc 00:01:49.308 23:26:23 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:49.308 23:26:23 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']' 00:01:49.308 23:26:23 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:49.308 23:26:23 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:49.308 23:26:23 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:49.308 23:26:23 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:49.308 23:26:23 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:49.308 23:26:23 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:49.308 23:26:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.308 23:26:23 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:49.308 23:26:23 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:49.308 23:26:23 -- pm/common@17 -- $ local monitor 00:01:49.308 23:26:23 -- pm/common@19 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:01:49.308 23:26:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.308 23:26:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.308 23:26:23 -- pm/common@21 -- $ date +%s 00:01:49.308 23:26:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.308 23:26:23 -- pm/common@21 -- $ date +%s 00:01:49.308 23:26:23 -- pm/common@25 -- $ sleep 1 00:01:49.308 23:26:23 -- pm/common@21 -- $ date +%s 00:01:49.308 23:26:23 -- pm/common@21 -- $ date +%s 00:01:49.308 23:26:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732055183 00:01:49.308 23:26:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732055183 00:01:49.308 23:26:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732055183 00:01:49.308 23:26:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732055183 00:01:49.308 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732055183_collect-vmstat.pm.log 00:01:49.308 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732055183_collect-cpu-load.pm.log 00:01:49.308 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732055183_collect-cpu-temp.pm.log 00:01:49.308 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732055183_collect-bmc-pm.bmc.pm.log 00:01:50.683 23:26:24 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:50.683 23:26:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:50.683 23:26:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:50.683 23:26:24 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.683 23:26:24 -- spdk/autobuild.sh@16 -- $ date -u 00:01:50.683 Tue Nov 19 10:26:24 PM UTC 2024 00:01:50.683 23:26:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:50.683 v25.01-pre-199-gf22e807f1 00:01:50.683 23:26:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:50.683 23:26:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:50.683 23:26:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:50.683 23:26:24 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:50.683 23:26:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:50.683 23:26:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.683 ************************************ 00:01:50.683 START TEST ubsan 00:01:50.683 ************************************ 00:01:50.683 23:26:24 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:50.683 using ubsan 00:01:50.683 00:01:50.683 real 0m0.000s 00:01:50.683 user 0m0.000s 00:01:50.683 sys 0m0.000s 00:01:50.683 23:26:24 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:50.683 23:26:24 ubsan -- common/autotest_common.sh@10 -- $ set +x 
00:01:50.683 ************************************ 00:01:50.683 END TEST ubsan 00:01:50.683 ************************************ 00:01:50.683 23:26:24 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:50.683 23:26:24 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:50.683 23:26:24 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:50.683 23:26:24 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:50.683 23:26:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:50.683 23:26:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.683 ************************************ 00:01:50.683 START TEST build_native_dpdk 00:01:50.683 ************************************ 00:01:50.683 23:26:24 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:50.683 caf0f5d395 version: 22.11.4 00:01:50.683 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:50.683 dc9c799c7d vhost: fix missing spinlock unlock 00:01:50.683 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:50.683 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 
00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:01:50.683 patching file config/rte_config.h 00:01:50.683 Hunk #1 succeeded at 60 (offset 1 line). 00:01:50.683 23:26:24 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:50.683 23:26:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:50.684 23:26:24 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:01:50.684 patching file lib/pcapng/rte_pcapng.c 00:01:50.684 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:50.684 23:26:24 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:50.684 23:26:24 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:50.684 23:26:24 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:01:50.684 23:26:24 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:01:50.684 23:26:24 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:01:50.684 23:26:24 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:01:50.684 23:26:24 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:01:54.867 The Meson build system 00:01:54.867 Version: 1.5.0 00:01:54.867 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:54.867 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:54.867 Build type: native build 00:01:54.867 Program cat found: YES (/usr/bin/cat) 00:01:54.867 Project name: DPDK 00:01:54.867 Project version: 22.11.4 00:01:54.867 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:54.867 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:54.867 Host machine cpu family: x86_64 00:01:54.867 Host machine cpu: x86_64 00:01:54.867 Message: ## Building in Developer Mode ## 00:01:54.867 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:54.867 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:54.867 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:54.867 Program objdump found: YES (/usr/bin/objdump) 00:01:54.867 Program python3 found: YES (/usr/bin/python3) 00:01:54.867 Program cat found: YES (/usr/bin/cat) 00:01:54.867 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:54.867 Checking for size of "void *" : 8 00:01:54.867 Checking for size of "void *" : 8 (cached) 00:01:54.867 Library m found: YES 00:01:54.867 Library numa found: YES 00:01:54.867 Has header "numaif.h" : YES 00:01:54.867 Library fdt found: NO 00:01:54.867 Library execinfo found: NO 00:01:54.867 Has header "execinfo.h" : YES 00:01:54.867 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:54.867 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:54.867 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:54.867 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:54.867 Run-time dependency openssl found: YES 3.1.1 00:01:54.867 Run-time dependency libpcap found: YES 1.10.4 00:01:54.867 Has header "pcap.h" with dependency libpcap: YES 00:01:54.867 Compiler for C supports arguments -Wcast-qual: YES 00:01:54.867 Compiler for C supports arguments -Wdeprecated: YES 00:01:54.867 Compiler for C supports arguments -Wformat: YES 00:01:54.867 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:54.867 Compiler for C supports arguments -Wformat-security: NO 00:01:54.867 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.867 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:54.867 Compiler for C supports arguments -Wnested-externs: YES 00:01:54.867 Compiler for C supports arguments -Wold-style-definition: YES 00:01:54.867 Compiler for C supports arguments -Wpointer-arith: YES 00:01:54.867 Compiler for C supports arguments -Wsign-compare: YES 00:01:54.867 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:54.867 Compiler for C supports arguments -Wundef: YES 00:01:54.867 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.867 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:54.867 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:54.868 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.868 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:54.868 Compiler for C supports arguments -mavx512f: YES 00:01:54.868 Checking if "AVX512 checking" compiles: YES 00:01:54.868 Fetching value of define "__SSE4_2__" : 1 00:01:54.868 Fetching value of define "__AES__" : 1 00:01:54.868 Fetching value of define "__AVX__" : 1 00:01:54.868 Fetching value of define "__AVX2__" : (undefined) 00:01:54.868 Fetching value of define "__AVX512BW__" : (undefined) 00:01:54.868 Fetching value of define "__AVX512CD__" : (undefined) 00:01:54.868 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:54.868 Fetching value of define "__AVX512F__" : (undefined) 00:01:54.868 Fetching value of define "__AVX512VL__" : (undefined) 00:01:54.868 Fetching value of define "__PCLMUL__" : 1 00:01:54.868 Fetching value of define "__RDRND__" : 1 00:01:54.868 Fetching value of define "__RDSEED__" : (undefined) 00:01:54.868 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:54.868 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.868 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.868 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.868 Checking for function "getentropy" : YES 00:01:54.868 Message: lib/eal: Defining dependency "eal" 00:01:54.868 Message: lib/ring: Defining dependency "ring" 00:01:54.868 Message: lib/rcu: Defining dependency "rcu" 00:01:54.868 Message: lib/mempool: Defining dependency "mempool" 00:01:54.868 Message: lib/mbuf: Defining dependency "mbuf" 00:01:54.868 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.868 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:54.868 Compiler for C supports arguments -mpclmul: YES 00:01:54.868 Compiler for C supports arguments -maes: YES 00:01:54.868 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:54.868 Compiler for C supports arguments -mavx512bw: YES 00:01:54.868 Compiler for C supports arguments -mavx512dq: YES 00:01:54.868 Compiler for C supports arguments -mavx512vl: YES 00:01:54.868 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:54.868 Compiler for C supports arguments -mavx2: YES 00:01:54.868 Compiler for C supports arguments -mavx: YES 00:01:54.868 Message: lib/net: Defining dependency "net" 00:01:54.868 Message: lib/meter: Defining dependency "meter" 00:01:54.868 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.868 Message: lib/pci: Defining dependency "pci" 00:01:54.868 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.868 Message: lib/metrics: Defining dependency "metrics" 00:01:54.868 Message: lib/hash: Defining dependency "hash" 00:01:54.868 Message: lib/timer: Defining dependency "timer" 00:01:54.868 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:54.868 Compiler for C supports arguments -mavx2: YES (cached) 00:01:54.868 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:54.868 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:54.868 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:54.868 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:54.868 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:54.868 Message: lib/acl: Defining dependency "acl" 00:01:54.868 Message: lib/bbdev: Defining dependency "bbdev" 00:01:54.868 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:54.868 Run-time dependency libelf found: YES 0.191 00:01:54.868 Message: lib/bpf: Defining dependency "bpf" 00:01:54.868 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:54.868 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.868 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.868 Message: lib/distributor: Defining dependency "distributor" 00:01:54.868 Message: lib/efd: Defining dependency "efd" 00:01:54.868 Message: lib/eventdev: Defining dependency "eventdev" 00:01:54.868 Message: lib/gpudev: Defining dependency "gpudev" 00:01:54.868 Message: lib/gro: Defining dependency "gro" 00:01:54.868 Message: lib/gso: Defining dependency "gso" 00:01:54.868 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:54.868 Message: lib/jobstats: Defining dependency "jobstats" 00:01:54.868 Message: lib/latencystats: Defining dependency "latencystats" 00:01:54.868 Message: lib/lpm: Defining dependency "lpm" 00:01:54.868 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:54.868 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:54.868 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:54.868 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:54.868 Message: lib/member: Defining dependency "member" 00:01:54.868 Message: lib/pcapng: Defining dependency "pcapng" 00:01:54.868 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.868 Message: lib/power: Defining dependency "power" 00:01:54.868 Message: lib/rawdev: Defining dependency "rawdev" 00:01:54.868 Message: lib/regexdev: Defining dependency "regexdev" 
00:01:54.868 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.868 Message: lib/rib: Defining dependency "rib" 00:01:54.868 Message: lib/reorder: Defining dependency "reorder" 00:01:54.868 Message: lib/sched: Defining dependency "sched" 00:01:54.868 Message: lib/security: Defining dependency "security" 00:01:54.868 Message: lib/stack: Defining dependency "stack" 00:01:54.868 Has header "linux/userfaultfd.h" : YES 00:01:54.868 Message: lib/vhost: Defining dependency "vhost" 00:01:54.868 Message: lib/ipsec: Defining dependency "ipsec" 00:01:54.868 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:54.868 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:54.868 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:54.868 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:54.868 Message: lib/fib: Defining dependency "fib" 00:01:54.868 Message: lib/port: Defining dependency "port" 00:01:54.868 Message: lib/pdump: Defining dependency "pdump" 00:01:54.868 Message: lib/table: Defining dependency "table" 00:01:54.868 Message: lib/pipeline: Defining dependency "pipeline" 00:01:54.868 Message: lib/graph: Defining dependency "graph" 00:01:54.868 Message: lib/node: Defining dependency "node" 00:01:54.868 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:54.868 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.868 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.868 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.868 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:54.868 Compiler for C supports arguments -Wno-unused-value: YES 00:01:56.242 Compiler for C supports arguments -Wno-format: YES 00:01:56.242 Compiler for C supports arguments -Wno-format-security: YES 00:01:56.242 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:56.242 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:56.242 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:56.242 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:56.242 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:56.242 Compiler for C supports arguments -mavx2: YES (cached) 00:01:56.242 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:56.242 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:56.242 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:56.242 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:56.242 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:56.242 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:56.242 Configuring doxy-api.conf using configuration 00:01:56.242 Program sphinx-build found: NO 00:01:56.242 Configuring rte_build_config.h using configuration 00:01:56.242 Message: 00:01:56.242 ================= 00:01:56.242 Applications Enabled 00:01:56.242 ================= 00:01:56.242 00:01:56.242 apps: 00:01:56.242 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:56.242 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:56.242 test-security-perf, 00:01:56.242 00:01:56.242 Message: 00:01:56.242 ================= 00:01:56.242 Libraries Enabled 00:01:56.242 ================= 00:01:56.242 00:01:56.242 libs: 00:01:56.242 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:56.242 meter, ethdev, 
pci, cmdline, metrics, hash, timer, acl, 00:01:56.242 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:56.242 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:56.242 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:56.242 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:56.242 table, pipeline, graph, node, 00:01:56.242 00:01:56.242 Message: 00:01:56.242 =============== 00:01:56.242 Drivers Enabled 00:01:56.242 =============== 00:01:56.242 00:01:56.242 common: 00:01:56.242 00:01:56.242 bus: 00:01:56.242 pci, vdev, 00:01:56.242 mempool: 00:01:56.242 ring, 00:01:56.242 dma: 00:01:56.242 00:01:56.242 net: 00:01:56.242 i40e, 00:01:56.242 raw: 00:01:56.242 00:01:56.242 crypto: 00:01:56.242 00:01:56.242 compress: 00:01:56.242 00:01:56.242 regex: 00:01:56.242 00:01:56.242 vdpa: 00:01:56.242 00:01:56.242 event: 00:01:56.242 00:01:56.242 baseband: 00:01:56.242 00:01:56.242 gpu: 00:01:56.242 00:01:56.242 00:01:56.242 Message: 00:01:56.242 ================= 00:01:56.242 Content Skipped 00:01:56.242 ================= 00:01:56.242 00:01:56.242 apps: 00:01:56.242 00:01:56.242 libs: 00:01:56.242 kni: explicitly disabled via build config (deprecated lib) 00:01:56.242 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:56.242 00:01:56.242 drivers: 00:01:56.242 common/cpt: not in enabled drivers build config 00:01:56.242 common/dpaax: not in enabled drivers build config 00:01:56.242 common/iavf: not in enabled drivers build config 00:01:56.242 common/idpf: not in enabled drivers build config 00:01:56.242 common/mvep: not in enabled drivers build config 00:01:56.242 common/octeontx: not in enabled drivers build config 00:01:56.242 bus/auxiliary: not in enabled drivers build config 00:01:56.242 bus/dpaa: not in enabled drivers build config 00:01:56.242 bus/fslmc: not in enabled drivers build config 00:01:56.242 bus/ifpga: not in enabled drivers build config 00:01:56.242 bus/vmbus: not in enabled drivers build config 00:01:56.242 common/cnxk: not in enabled drivers build config 00:01:56.242 common/mlx5: not in enabled drivers build config 00:01:56.242 common/qat: not in enabled drivers build config 00:01:56.242 common/sfc_efx: not in enabled drivers build config 00:01:56.242 mempool/bucket: not in enabled drivers build config 00:01:56.242 mempool/cnxk: not in enabled drivers build config 00:01:56.242 mempool/dpaa: not in enabled drivers build config 00:01:56.242 mempool/dpaa2: not in enabled drivers build config 00:01:56.242 mempool/octeontx: not in enabled drivers build config 00:01:56.242 mempool/stack: not in enabled drivers build config 00:01:56.242 dma/cnxk: not in enabled drivers build config 00:01:56.242 dma/dpaa: not in enabled drivers build config 00:01:56.242 dma/dpaa2: not in enabled drivers build config 00:01:56.242 dma/hisilicon: not in enabled drivers build config 00:01:56.242 dma/idxd: not in enabled drivers build config 00:01:56.242 dma/ioat: not in enabled drivers build config 00:01:56.242 dma/skeleton: not in enabled drivers build config 00:01:56.242 net/af_packet: not in enabled drivers build config 00:01:56.242 net/af_xdp: not in enabled drivers build config 00:01:56.242 net/ark: not in enabled drivers build config 00:01:56.242 net/atlantic: not in enabled drivers build config 00:01:56.242 net/avp: not in enabled drivers build config 00:01:56.242 net/axgbe: not in enabled drivers build config 00:01:56.242 net/bnx2x: not in enabled drivers build config 00:01:56.242 net/bnxt: not in 
enabled drivers build config 00:01:56.242 net/bonding: not in enabled drivers build config 00:01:56.242 net/cnxk: not in enabled drivers build config 00:01:56.242 net/cxgbe: not in enabled drivers build config 00:01:56.242 net/dpaa: not in enabled drivers build config 00:01:56.242 net/dpaa2: not in enabled drivers build config 00:01:56.242 net/e1000: not in enabled drivers build config 00:01:56.242 net/ena: not in enabled drivers build config 00:01:56.242 net/enetc: not in enabled drivers build config 00:01:56.242 net/enetfec: not in enabled drivers build config 00:01:56.242 net/enic: not in enabled drivers build config 00:01:56.242 net/failsafe: not in enabled drivers build config 00:01:56.242 net/fm10k: not in enabled drivers build config 00:01:56.242 net/gve: not in enabled drivers build config 00:01:56.242 net/hinic: not in enabled drivers build config 00:01:56.242 net/hns3: not in enabled drivers build config 00:01:56.242 net/iavf: not in enabled drivers build config 00:01:56.242 net/ice: not in enabled drivers build config 00:01:56.242 net/idpf: not in enabled drivers build config 00:01:56.242 net/igc: not in enabled drivers build config 00:01:56.242 net/ionic: not in enabled drivers build config 00:01:56.242 net/ipn3ke: not in enabled drivers build config 00:01:56.242 net/ixgbe: not in enabled drivers build config 00:01:56.242 net/kni: not in enabled drivers build config 00:01:56.242 net/liquidio: not in enabled drivers build config 00:01:56.242 net/mana: not in enabled drivers build config 00:01:56.242 net/memif: not in enabled drivers build config 00:01:56.242 net/mlx4: not in enabled drivers build config 00:01:56.242 net/mlx5: not in enabled drivers build config 00:01:56.242 net/mvneta: not in enabled drivers build config 00:01:56.242 net/mvpp2: not in enabled drivers build config 00:01:56.242 net/netvsc: not in enabled drivers build config 00:01:56.242 net/nfb: not in enabled drivers build config 00:01:56.242 net/nfp: not in enabled drivers build config 00:01:56.242 net/ngbe: not in enabled drivers build config 00:01:56.242 net/null: not in enabled drivers build config 00:01:56.242 net/octeontx: not in enabled drivers build config 00:01:56.242 net/octeon_ep: not in enabled drivers build config 00:01:56.242 net/pcap: not in enabled drivers build config 00:01:56.242 net/pfe: not in enabled drivers build config 00:01:56.242 net/qede: not in enabled drivers build config 00:01:56.242 net/ring: not in enabled drivers build config 00:01:56.242 net/sfc: not in enabled drivers build config 00:01:56.242 net/softnic: not in enabled drivers build config 00:01:56.242 net/tap: not in enabled drivers build config 00:01:56.242 net/thunderx: not in enabled drivers build config 00:01:56.242 net/txgbe: not in enabled drivers build config 00:01:56.242 net/vdev_netvsc: not in enabled drivers build config 00:01:56.242 net/vhost: not in enabled drivers build config 00:01:56.242 net/virtio: not in enabled drivers build config 00:01:56.242 net/vmxnet3: not in enabled drivers build config 00:01:56.242 raw/cnxk_bphy: not in enabled drivers build config 00:01:56.242 raw/cnxk_gpio: not in enabled drivers build config 00:01:56.242 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:56.242 raw/ifpga: not in enabled drivers build config 00:01:56.243 raw/ntb: not in enabled drivers build config 00:01:56.243 raw/skeleton: not in enabled drivers build config 00:01:56.243 crypto/armv8: not in enabled drivers build config 00:01:56.243 crypto/bcmfs: not in enabled drivers build config 00:01:56.243 
crypto/caam_jr: not in enabled drivers build config 00:01:56.243 crypto/ccp: not in enabled drivers build config 00:01:56.243 crypto/cnxk: not in enabled drivers build config 00:01:56.243 crypto/dpaa_sec: not in enabled drivers build config 00:01:56.243 crypto/dpaa2_sec: not in enabled drivers build config 00:01:56.243 crypto/ipsec_mb: not in enabled drivers build config 00:01:56.243 crypto/mlx5: not in enabled drivers build config 00:01:56.243 crypto/mvsam: not in enabled drivers build config 00:01:56.243 crypto/nitrox: not in enabled drivers build config 00:01:56.243 crypto/null: not in enabled drivers build config 00:01:56.243 crypto/octeontx: not in enabled drivers build config 00:01:56.243 crypto/openssl: not in enabled drivers build config 00:01:56.243 crypto/scheduler: not in enabled drivers build config 00:01:56.243 crypto/uadk: not in enabled drivers build config 00:01:56.243 crypto/virtio: not in enabled drivers build config 00:01:56.243 compress/isal: not in enabled drivers build config 00:01:56.243 compress/mlx5: not in enabled drivers build config 00:01:56.243 compress/octeontx: not in enabled drivers build config 00:01:56.243 compress/zlib: not in enabled drivers build config 00:01:56.243 regex/mlx5: not in enabled drivers build config 00:01:56.243 regex/cn9k: not in enabled drivers build config 00:01:56.243 vdpa/ifc: not in enabled drivers build config 00:01:56.243 vdpa/mlx5: not in enabled drivers build config 00:01:56.243 vdpa/sfc: not in enabled drivers build config 00:01:56.243 event/cnxk: not in enabled drivers build config 00:01:56.243 event/dlb2: not in enabled drivers build config 00:01:56.243 event/dpaa: not in enabled drivers build config 00:01:56.243 event/dpaa2: not in enabled drivers build config 00:01:56.243 event/dsw: not in enabled drivers build config 00:01:56.243 event/opdl: not in enabled drivers build config 00:01:56.243 event/skeleton: not in enabled drivers build config 00:01:56.243 event/sw: not in enabled drivers build config 00:01:56.243 event/octeontx: not in enabled drivers build config 00:01:56.243 baseband/acc: not in enabled drivers build config 00:01:56.243 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:56.243 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:56.243 baseband/la12xx: not in enabled drivers build config 00:01:56.243 baseband/null: not in enabled drivers build config 00:01:56.243 baseband/turbo_sw: not in enabled drivers build config 00:01:56.243 gpu/cuda: not in enabled drivers build config 00:01:56.243 00:01:56.243 00:01:56.243 Build targets in project: 316 00:01:56.243 00:01:56.243 DPDK 22.11.4 00:01:56.243 00:01:56.243 User defined options 00:01:56.243 libdir : lib 00:01:56.243 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:56.243 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:56.243 c_link_args : 00:01:56.243 enable_docs : false 00:01:56.243 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:01:56.243 enable_kmods : false 00:01:56.243 machine : native 00:01:56.243 tests : false 00:01:56.243 00:01:56.243 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:56.243 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
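For reference, the meson deprecation warning above refers to meson's two-phase invocation: the configure step should be spelled `meson setup <builddir> [options]` rather than bare `meson [options]`. The following is only an illustrative sketch of that non-deprecated form, reusing the values shown in the "User defined options" summary (the enable_drivers list is abbreviated here; the exact command line used by the autobuild script is not shown in this log):

    # configure (run from the dpdk source tree); values taken from the summary above
    meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e   # full list as in the summary above
    # compile, as in the build step that follows
    ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48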
00:01:56.243 23:26:30 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:56.243 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:56.509 [1/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:56.509 [2/745] Generating lib/rte_telemetry_def with a custom command 00:01:56.509 [3/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:56.509 [4/745] Generating lib/rte_kvargs_def with a custom command 00:01:56.509 [5/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:56.509 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:56.509 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:56.509 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:56.509 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:56.509 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:56.509 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:56.509 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:56.509 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:56.509 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:56.509 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:56.509 [16/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:56.509 [17/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:56.509 [18/745] Linking static target lib/librte_kvargs.a 00:01:56.509 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:56.509 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:56.509 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:56.509 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:56.509 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:56.769 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:56.769 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:56.769 [26/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:56.769 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:56.769 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:56.769 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:56.769 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:56.769 [31/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:56.769 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:56.769 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:56.769 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:56.769 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:56.769 [36/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:56.769 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:56.769 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:56.769 [39/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:56.769 [40/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:56.769 [41/745] Generating lib/rte_eal_def with a custom command 00:01:56.769 [42/745] Generating lib/rte_eal_mingw with a custom command 00:01:56.769 [43/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:56.769 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:56.769 [45/745] Generating lib/rte_ring_def with a custom command 00:01:56.769 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:56.769 [47/745] Generating lib/rte_ring_mingw with a custom command 00:01:56.769 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:56.769 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:56.769 [50/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:56.769 [51/745] Generating lib/rte_rcu_def with a custom command 00:01:56.769 [52/745] Generating lib/rte_rcu_mingw with a custom command 00:01:56.769 [53/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:56.769 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:56.769 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:56.769 [56/745] Generating lib/rte_mempool_mingw with a custom command 00:01:56.769 [57/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:56.769 [58/745] Generating lib/rte_mempool_def with a custom command 00:01:56.769 [59/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:56.769 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:56.769 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:56.769 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:56.769 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:56.770 [64/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:56.770 [65/745] Generating lib/rte_mbuf_def with a custom command 00:01:56.770 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:56.770 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:56.770 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:56.770 [69/745] Generating lib/rte_meter_def with a custom command 00:01:56.770 [70/745] Generating lib/rte_meter_mingw with a custom command 00:01:56.770 [71/745] Generating lib/rte_net_mingw with a custom command 00:01:56.770 [72/745] Generating lib/rte_net_def with a custom command 00:01:56.770 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:56.770 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:57.030 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:57.030 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:57.030 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:57.030 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:57.030 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.030 [80/745] Generating 
lib/rte_ethdev_mingw with a custom command 00:01:57.030 [81/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:57.030 [82/745] Linking static target lib/librte_ring.a 00:01:57.030 [83/745] Linking target lib/librte_kvargs.so.23.0 00:01:57.030 [84/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:57.030 [85/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:57.030 [86/745] Linking static target lib/librte_meter.a 00:01:57.030 [87/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:57.030 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:57.030 [89/745] Generating lib/rte_pci_def with a custom command 00:01:57.291 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:57.291 [91/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:57.291 [92/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:57.291 [93/745] Linking static target lib/librte_pci.a 00:01:57.291 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:57.291 [95/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:57.291 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:57.291 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:57.291 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:57.549 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.549 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:57.549 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:57.549 [102/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.549 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:57.549 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:57.549 [105/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.549 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:57.549 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:57.549 [108/745] Generating lib/rte_cmdline_def with a custom command 00:01:57.549 [109/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:57.549 [110/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:57.549 [111/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:57.549 [112/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:57.549 [113/745] Linking static target lib/librte_telemetry.a 00:01:57.549 [114/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:57.549 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:57.549 [116/745] Generating lib/rte_metrics_def with a custom command 00:01:57.549 [117/745] Generating lib/rte_hash_def with a custom command 00:01:57.549 [118/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:57.549 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:57.810 [120/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:57.810 [121/745] Generating lib/rte_timer_def with a custom command 00:01:57.810 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:57.810 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:57.810 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:57.810 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:58.074 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.074 [127/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:58.074 [128/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.074 [129/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.074 [130/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.074 [131/745] Generating lib/rte_acl_def with a custom command 00:01:58.074 [132/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.074 [133/745] Generating lib/rte_acl_mingw with a custom command 00:01:58.074 [134/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.074 [135/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.074 [136/745] Generating lib/rte_bbdev_def with a custom command 00:01:58.074 [137/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.074 [138/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:58.074 [139/745] Generating lib/rte_bitratestats_def with a custom command 00:01:58.074 [140/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.074 [141/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.074 [142/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:58.074 [143/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:58.074 [144/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.074 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:58.074 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:58.332 [147/745] Linking target lib/librte_telemetry.so.23.0 00:01:58.332 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:58.332 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.332 [150/745] Generating lib/rte_bpf_def with a custom command 00:01:58.332 [151/745] Generating lib/rte_bpf_mingw with a custom command 00:01:58.332 [152/745] Generating lib/rte_cfgfile_def with a custom command 00:01:58.332 [153/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:58.332 [154/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:58.332 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:58.332 [156/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:58.332 [157/745] Generating lib/rte_compressdev_def with a custom command 00:01:58.332 [158/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.332 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:58.332 [160/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:58.332 [161/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.332 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:58.332 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:58.592 [164/745] Generating 
symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:58.593 [165/745] Generating lib/rte_cryptodev_def with a custom command 00:01:58.593 [166/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:58.593 [167/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:58.593 [168/745] Linking static target lib/librte_rcu.a 00:01:58.593 [169/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:58.593 [170/745] Generating lib/rte_distributor_def with a custom command 00:01:58.593 [171/745] Generating lib/rte_distributor_mingw with a custom command 00:01:58.593 [172/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:58.593 [173/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:58.593 [174/745] Linking static target lib/librte_cmdline.a 00:01:58.593 [175/745] Linking static target lib/librte_timer.a 00:01:58.593 [176/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.593 [177/745] Generating lib/rte_efd_def with a custom command 00:01:58.593 [178/745] Generating lib/rte_efd_mingw with a custom command 00:01:58.593 [179/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.593 [180/745] Linking static target lib/librte_net.a 00:01:58.593 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:58.853 [182/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:58.853 [183/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:58.853 [184/745] Linking static target lib/librte_cfgfile.a 00:01:58.853 [185/745] Linking static target lib/librte_metrics.a 00:01:58.853 [186/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.853 [187/745] Linking static target lib/librte_mempool.a 00:01:59.119 [188/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.119 [189/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:59.119 [190/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.119 [191/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:59.119 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:59.119 [193/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.119 [194/745] Linking static target lib/librte_eal.a 00:01:59.119 [195/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:59.119 [196/745] Generating lib/rte_eventdev_def with a custom command 00:01:59.382 [197/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:59.382 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:59.382 [199/745] Generating lib/rte_gpudev_def with a custom command 00:01:59.382 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:59.382 [201/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:59.382 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:59.382 [203/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:59.382 [204/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:59.382 [205/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:59.382 [206/745] Linking static target lib/librte_bitratestats.a 00:01:59.382 [207/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:59.382 [208/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.382 [209/745] Generating lib/rte_gro_def with a custom command 00:01:59.382 [210/745] Generating lib/rte_gro_mingw with a custom command 00:01:59.644 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:59.644 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:59.644 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:59.644 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:59.644 [215/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:59.644 [216/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.644 [217/745] Generating lib/rte_gso_def with a custom command 00:01:59.644 [218/745] Generating lib/rte_gso_mingw with a custom command 00:01:59.906 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:59.906 [220/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:59.906 [221/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:59.906 [222/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:59.906 [223/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:59.906 [224/745] Linking static target lib/librte_bbdev.a 00:01:59.906 [225/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:59.906 [226/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.906 [227/745] Generating lib/rte_ip_frag_def with a custom command 00:01:59.906 [228/745] Generating lib/rte_ip_frag_mingw with a custom command 00:02:00.168 [229/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:00.168 [230/745] Generating lib/rte_jobstats_def with a custom command 00:02:00.168 [231/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.168 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:02:00.168 [233/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:00.168 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:02:00.168 [235/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:00.168 [236/745] Generating lib/rte_latencystats_def with a custom command 00:02:00.168 [237/745] Linking static target lib/librte_compressdev.a 00:02:00.168 [238/745] Generating lib/rte_lpm_def with a custom command 00:02:00.168 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:02:00.168 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:00.168 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:00.168 [242/745] Linking static target lib/librte_jobstats.a 00:02:00.430 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:00.430 [244/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:00.430 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:00.430 [246/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:00.430 [247/745] Linking static target lib/librte_distributor.a 00:02:00.699 [248/745] Generating 
lib/rte_member_def with a custom command 00:02:00.699 [249/745] Generating lib/rte_member_mingw with a custom command 00:02:00.699 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:00.699 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:00.699 [252/745] Generating lib/rte_pcapng_def with a custom command 00:02:00.699 [253/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.699 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:02:00.968 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:00.968 [256/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:00.968 [257/745] Linking static target lib/librte_bpf.a 00:02:00.968 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:00.968 [259/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:00.968 [260/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:00.968 [261/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.968 [262/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.968 [263/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:00.968 [264/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:00.968 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:00.968 [266/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:00.968 [267/745] Generating lib/rte_power_def with a custom command 00:02:00.968 [268/745] Generating lib/rte_power_mingw with a custom command 00:02:00.968 [269/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:00.968 [270/745] Linking static target lib/librte_gpudev.a 00:02:00.968 [271/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:00.968 [272/745] Generating lib/rte_rawdev_def with a custom command 00:02:00.968 [273/745] Generating lib/rte_rawdev_mingw with a custom command 00:02:00.968 [274/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:01.229 [275/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:01.229 [276/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:01.229 [277/745] Linking static target lib/librte_gro.a 00:02:01.229 [278/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:01.229 [279/745] Generating lib/rte_regexdev_def with a custom command 00:02:01.229 [280/745] Generating lib/rte_regexdev_mingw with a custom command 00:02:01.229 [281/745] Generating lib/rte_dmadev_def with a custom command 00:02:01.229 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:02:01.229 [283/745] Generating lib/rte_rib_def with a custom command 00:02:01.229 [284/745] Generating lib/rte_rib_mingw with a custom command 00:02:01.229 [285/745] Generating lib/rte_reorder_def with a custom command 00:02:01.491 [286/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.491 [287/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:01.491 [288/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:01.491 [289/745] Generating lib/rte_reorder_mingw with a custom command 00:02:01.491 [290/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:01.491 [291/745] Generating lib/gro.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:01.491 [292/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:01.491 [293/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:01.491 [294/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:01.491 [295/745] Generating lib/rte_sched_def with a custom command 00:02:01.491 [296/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:01.491 [297/745] Generating lib/rte_sched_mingw with a custom command 00:02:01.491 [298/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:01.491 [299/745] Generating lib/rte_security_mingw with a custom command 00:02:01.491 [300/745] Generating lib/rte_security_def with a custom command 00:02:01.759 [301/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:01.759 [302/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:01.759 [303/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:01.759 [304/745] Generating lib/rte_stack_def with a custom command 00:02:01.759 [305/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.759 [306/745] Generating lib/rte_stack_mingw with a custom command 00:02:01.759 [307/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:01.759 [308/745] Linking static target lib/librte_latencystats.a 00:02:01.759 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:01.759 [310/745] Linking static target lib/librte_rawdev.a 00:02:01.759 [311/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:01.759 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:01.759 [313/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:01.759 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:01.759 [315/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:01.759 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:01.759 [317/745] Linking static target lib/librte_stack.a 00:02:01.759 [318/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:01.759 [319/745] Generating lib/rte_vhost_def with a custom command 00:02:01.759 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:02:01.759 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:01.759 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:02.028 [323/745] Linking static target lib/librte_dmadev.a 00:02:02.028 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:02.028 [325/745] Linking static target lib/librte_ip_frag.a 00:02:02.028 [326/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:02.028 [327/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.028 [328/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:02.028 [329/745] Generating lib/rte_ipsec_def with a custom command 00:02:02.029 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:02:02.029 [331/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.314 [332/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 
00:02:02.314 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:02.314 [334/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.598 [335/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.598 [336/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:02.598 [337/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.598 [338/745] Generating lib/rte_fib_def with a custom command 00:02:02.598 [339/745] Generating lib/rte_fib_mingw with a custom command 00:02:02.598 [340/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:02.598 [341/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:02.598 [342/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:02.598 [343/745] Linking static target lib/librte_gso.a 00:02:02.598 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:02.599 [345/745] Linking static target lib/librte_regexdev.a 00:02:02.865 [346/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:02.865 [347/745] Linking static target lib/librte_efd.a 00:02:02.865 [348/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.865 [349/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:02.865 [350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.865 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:02.865 [352/745] Linking static target lib/librte_pcapng.a 00:02:03.125 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:03.125 [354/745] Linking static target lib/librte_lpm.a 00:02:03.125 [355/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:03.125 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:03.125 [357/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:03.125 [358/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:03.125 [359/745] Linking static target lib/librte_reorder.a 00:02:03.125 [360/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.125 [361/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:03.390 [362/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:03.390 [363/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:03.390 [364/745] Linking static target lib/acl/libavx2_tmp.a 00:02:03.390 [365/745] Generating lib/rte_port_def with a custom command 00:02:03.390 [366/745] Generating lib/rte_port_mingw with a custom command 00:02:03.390 [367/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:03.390 [368/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:03.390 [369/745] Generating lib/rte_pdump_def with a custom command 00:02:03.390 [370/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:03.390 [371/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.390 [372/745] Generating lib/rte_pdump_mingw with a custom command 00:02:03.390 [373/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:03.390 [374/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:03.390 
[375/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:03.390 [376/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:03.390 [377/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:03.650 [378/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:03.650 [379/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:03.650 [380/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:03.650 [381/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.650 [382/745] Linking static target lib/librte_security.a 00:02:03.650 [383/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.650 [384/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:03.650 [385/745] Linking static target lib/librte_power.a 00:02:03.650 [386/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:03.916 [387/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:03.916 [388/745] Linking static target lib/librte_hash.a 00:02:03.916 [389/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:03.916 [390/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.916 [391/745] Linking static target lib/librte_rib.a 00:02:03.916 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:03.916 [393/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:03.916 [394/745] Linking static target lib/acl/libavx512_tmp.a 00:02:04.178 [395/745] Linking static target lib/librte_acl.a 00:02:04.178 [396/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:04.178 [397/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:04.178 [398/745] Generating lib/rte_table_def with a custom command 00:02:04.178 [399/745] Generating lib/rte_table_mingw with a custom command 00:02:04.443 [400/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:04.443 [401/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.443 [402/745] Linking static target lib/librte_ethdev.a 00:02:04.443 [403/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:04.443 [404/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:04.443 [405/745] Linking static target lib/librte_mbuf.a 00:02:04.443 [406/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.708 [407/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.708 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:04.708 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:04.708 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:04.708 [411/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.708 [412/745] Generating lib/rte_pipeline_def with a custom command 00:02:04.708 [413/745] Generating lib/rte_pipeline_mingw with a custom command 00:02:04.708 [414/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:04.708 [415/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:04.708 [416/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:04.972 
[417/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:04.972 [418/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:04.972 [419/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:04.972 [420/745] Generating lib/rte_graph_mingw with a custom command 00:02:04.972 [421/745] Generating lib/rte_graph_def with a custom command 00:02:04.972 [422/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:04.972 [423/745] Linking static target lib/librte_fib.a 00:02:04.972 [424/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.973 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:05.235 [426/745] Linking static target lib/librte_member.a 00:02:05.235 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:05.235 [428/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.235 [429/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:05.235 [430/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:05.235 [431/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:05.235 [432/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:05.235 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:05.235 [434/745] Linking static target lib/librte_eventdev.a 00:02:05.235 [435/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:05.235 [436/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:05.235 [437/745] Generating lib/rte_node_def with a custom command 00:02:05.235 [438/745] Generating lib/rte_node_mingw with a custom command 00:02:05.495 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:05.495 [440/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.495 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:05.495 [442/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.495 [443/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:05.495 [444/745] Generating drivers/rte_bus_pci_def with a custom command 00:02:05.495 [445/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.495 [446/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:05.765 [447/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:05.765 [448/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:05.765 [449/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.765 [450/745] Linking static target lib/librte_sched.a 00:02:05.765 [451/745] Generating drivers/rte_bus_vdev_def with a custom command 00:02:05.765 [452/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:05.765 [453/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:05.765 [454/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:05.765 [455/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:05.765 [456/745] Linking static target lib/librte_cryptodev.a 00:02:05.765 [457/745] Generating drivers/rte_mempool_ring_def with a custom command 
00:02:05.765 [458/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:05.765 [459/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:05.765 [460/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:05.765 [461/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:06.026 [462/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:06.026 [463/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:06.026 [464/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:06.026 [465/745] Linking static target lib/librte_pdump.a 00:02:06.026 [466/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:06.026 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:06.026 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:06.026 [469/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:06.026 [470/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:06.026 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:06.026 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:06.026 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:06.288 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:06.288 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:06.288 [476/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:06.288 [477/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:06.288 [478/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.553 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:06.553 [480/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:06.553 [481/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.553 [482/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:06.553 [483/745] Generating drivers/rte_net_i40e_def with a custom command 00:02:06.553 [484/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:06.553 [485/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:06.553 [486/745] Linking static target lib/librte_table.a 00:02:06.553 [487/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:06.553 [488/745] Linking static target drivers/librte_bus_vdev.a 00:02:06.553 [489/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:06.553 [490/745] Linking static target lib/librte_ipsec.a 00:02:06.814 [491/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:06.814 [492/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:06.815 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:06.815 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:07.077 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.077 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:07.077 [497/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:07.077 [498/745] 
Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:07.077 [499/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:07.077 [500/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:07.077 [501/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.077 [502/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:07.343 [503/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:07.343 [504/745] Linking static target lib/librte_graph.a 00:02:07.343 [505/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:07.343 [506/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:07.343 [507/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:07.343 [508/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.343 [509/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:07.343 [510/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.343 [511/745] Linking static target drivers/librte_bus_pci.a 00:02:07.603 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:07.603 [513/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.603 [514/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:07.870 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:08.129 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.129 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:08.129 [518/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:08.129 [519/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:08.129 [520/745] Linking static target lib/librte_port.a 00:02:08.129 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:08.391 [522/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:08.391 [523/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:08.391 [524/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:08.391 [525/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.391 [526/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:08.659 [527/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:08.659 [528/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:08.659 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:08.659 [530/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.659 [531/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.659 [532/745] Linking static target drivers/librte_mempool_ring.a 00:02:08.921 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.921 [534/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:08.921 [535/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:08.921 [536/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:08.921 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:08.921 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:09.181 [539/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.181 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:09.181 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.443 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:09.443 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:09.706 [544/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:09.706 [545/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:09.706 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:09.706 [547/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:09.969 [548/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:09.969 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:09.969 [550/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:09.969 [551/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:10.233 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:10.233 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:10.233 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:10.233 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:10.495 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:10.495 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:10.758 [558/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:10.758 [559/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:11.020 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:11.020 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:11.020 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:11.020 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:11.020 [564/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:11.020 [565/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:11.020 [566/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:11.283 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:11.283 [568/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:11.283 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:11.283 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:11.283 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:11.544 
[572/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:11.544 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:11.544 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:11.544 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:11.806 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:11.806 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:11.806 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:11.806 [579/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:12.071 [580/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:12.071 [581/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.071 [582/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:12.071 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:12.071 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:12.071 [585/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:12.071 [586/745] Linking target lib/librte_eal.so.23.0 00:02:12.333 [587/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.333 [588/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:12.592 [589/745] Linking target lib/librte_ring.so.23.0 00:02:12.592 [590/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:12.592 [591/745] Linking target lib/librte_meter.so.23.0 00:02:12.592 [592/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:12.592 [593/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:12.856 [594/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:12.856 [595/745] Linking target lib/librte_pci.so.23.0 00:02:12.856 [596/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:12.856 [597/745] Linking target lib/librte_rcu.so.23.0 00:02:12.856 [598/745] Linking target lib/librte_timer.so.23.0 00:02:12.856 [599/745] Linking target lib/librte_mempool.so.23.0 00:02:12.856 [600/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:12.856 [601/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:12.856 [602/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:12.856 [603/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:12.856 [604/745] Linking target lib/librte_cfgfile.so.23.0 00:02:12.856 [605/745] Linking target lib/librte_jobstats.so.23.0 00:02:12.856 [606/745] Linking target lib/librte_acl.so.23.0 00:02:12.856 [607/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:12.856 [608/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:13.119 [609/745] Linking target lib/librte_rawdev.so.23.0 00:02:13.119 [610/745] Linking target lib/librte_dmadev.so.23.0 00:02:13.119 [611/745] Linking target lib/librte_stack.so.23.0 00:02:13.119 [612/745] Generating symbol file 
lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:13.119 [613/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:13.119 [614/745] Linking target lib/librte_graph.so.23.0 00:02:13.119 [615/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:13.119 [616/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:13.119 [617/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:13.119 [618/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:13.119 [619/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:13.119 [620/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:13.119 [621/745] Linking target lib/librte_rib.so.23.0 00:02:13.119 [622/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:13.119 [623/745] Linking target lib/librte_mbuf.so.23.0 00:02:13.119 [624/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:13.119 [625/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:13.377 [626/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:13.377 [627/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:13.377 [628/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:13.377 [629/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:13.377 [630/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:13.377 [631/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:13.377 [632/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:13.377 [633/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:13.377 [634/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:13.377 [635/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:13.377 [636/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:13.377 [637/745] Linking target lib/librte_reorder.so.23.0 00:02:13.377 [638/745] Linking target lib/librte_distributor.so.23.0 00:02:13.377 [639/745] Linking target lib/librte_fib.so.23.0 00:02:13.377 [640/745] Linking target lib/librte_gpudev.so.23.0 00:02:13.377 [641/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:13.377 [642/745] Linking target lib/librte_bbdev.so.23.0 00:02:13.377 [643/745] Linking target lib/librte_compressdev.so.23.0 00:02:13.377 [644/745] Linking target lib/librte_regexdev.so.23.0 00:02:13.377 [645/745] Linking target lib/librte_net.so.23.0 00:02:13.377 [646/745] Linking target lib/librte_sched.so.23.0 00:02:13.377 [647/745] Linking target lib/librte_cryptodev.so.23.0 00:02:13.635 [648/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:13.635 [649/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:13.635 [650/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:13.635 [651/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:13.635 [652/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:13.635 [653/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:13.635 [654/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:13.635 [655/745] Linking target lib/librte_cmdline.so.23.0 00:02:13.635 [656/745] Linking target lib/librte_security.so.23.0 00:02:13.635 [657/745] Linking target lib/librte_hash.so.23.0 00:02:13.635 [658/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:13.635 [659/745] Linking target lib/librte_ethdev.so.23.0 00:02:13.892 [660/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:13.892 [661/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:13.892 [662/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:13.892 [663/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:13.892 [664/745] Linking target lib/librte_efd.so.23.0 00:02:13.892 [665/745] Linking target lib/librte_lpm.so.23.0 00:02:13.892 [666/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:13.892 [667/745] Linking target lib/librte_member.so.23.0 00:02:13.892 [668/745] Linking target lib/librte_ipsec.so.23.0 00:02:13.892 [669/745] Linking target lib/librte_pcapng.so.23.0 00:02:13.892 [670/745] Linking target lib/librte_gso.so.23.0 00:02:13.892 [671/745] Linking target lib/librte_bpf.so.23.0 00:02:13.892 [672/745] Linking target lib/librte_eventdev.so.23.0 00:02:13.892 [673/745] Linking target lib/librte_metrics.so.23.0 00:02:13.892 [674/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:13.892 [675/745] Linking target lib/librte_ip_frag.so.23.0 00:02:13.892 [676/745] Linking target lib/librte_gro.so.23.0 00:02:13.892 [677/745] Linking target lib/librte_power.so.23.0 00:02:13.892 [678/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:13.892 [679/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:14.150 [680/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:14.150 [681/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:14.150 [682/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:14.150 [683/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:14.150 [684/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:14.150 [685/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:14.150 [686/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:14.150 [687/745] Linking target lib/librte_port.so.23.0 00:02:14.150 [688/745] Linking target lib/librte_pdump.so.23.0 00:02:14.150 [689/745] Linking target lib/librte_latencystats.so.23.0 00:02:14.150 [690/745] Linking target lib/librte_bitratestats.so.23.0 00:02:14.150 [691/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:14.408 [692/745] Linking target lib/librte_table.so.23.0 00:02:14.408 [693/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:14.408 [694/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:14.408 [695/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:14.666 [696/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:14.666 [697/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:14.924 [698/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:14.924 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:15.182 [700/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:15.182 [701/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:15.440 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:15.440 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:15.440 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:15.697 [705/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:15.698 [706/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:15.698 [707/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:15.698 [708/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:15.698 [709/745] Linking static target drivers/librte_net_i40e.a 00:02:15.955 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:16.215 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.473 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:02:17.038 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:17.038 [714/745] Linking static target lib/librte_node.a 00:02:17.038 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.038 [716/745] Linking target lib/librte_node.so.23.0 00:02:17.604 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:18.169 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:18.733 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:26.839 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:58.961 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:58.961 [722/745] Linking static target lib/librte_vhost.a 00:02:58.961 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.961 [724/745] Linking target lib/librte_vhost.so.23.0 00:03:08.931 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:08.931 [726/745] Linking static target lib/librte_pipeline.a 00:03:08.931 [727/745] Linking target app/dpdk-test-sad 00:03:08.931 [728/745] Linking target app/dpdk-pdump 00:03:08.931 [729/745] Linking target app/dpdk-test-gpudev 00:03:08.931 [730/745] Linking target app/dpdk-dumpcap 00:03:08.931 [731/745] Linking target app/dpdk-test-fib 00:03:08.931 [732/745] Linking target app/dpdk-test-regex 00:03:08.931 [733/745] Linking target app/dpdk-test-cmdline 00:03:08.931 [734/745] Linking target app/dpdk-test-pipeline 00:03:08.931 [735/745] Linking target app/dpdk-test-flow-perf 00:03:08.931 [736/745] Linking target app/dpdk-proc-info 00:03:08.931 [737/745] Linking target app/dpdk-test-security-perf 00:03:08.931 [738/745] Linking target app/dpdk-test-crypto-perf 00:03:08.931 [739/745] Linking target app/dpdk-test-eventdev 00:03:08.931 [740/745] Linking target app/dpdk-test-bbdev 00:03:08.931 [741/745] Linking target app/dpdk-test-compress-perf 00:03:08.931 [742/745] Linking 
target app/dpdk-test-acl 00:03:08.931 [743/745] Linking target app/dpdk-testpmd 00:03:10.830 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.830 [745/745] Linking target lib/librte_pipeline.so.23.0 00:03:10.831 23:27:45 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:10.831 23:27:45 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:10.831 23:27:45 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:11.089 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:11.089 [0/1] Installing files. 00:03:11.350 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.350 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:11.351 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.351 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 
00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.352 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.353 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.353 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:11.354 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.354 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:11.355 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.355 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.356 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:11.356 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing 
lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.356 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.615 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.615 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.615 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.615 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.615 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.615 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.615 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.615 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.615 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.615 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.615 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.615 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_bbdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_member.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_table.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.616 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.876 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.876 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.876 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.877 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.877 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.877 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.877 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.877 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.877 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.877 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:11.877 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.877 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:11.877 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.877 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.878 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.879 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:11.880 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:12.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:12.141 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:12.141 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:12.141 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:12.141 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:12.141 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:12.141 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:12.141 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:12.141 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:12.141 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:12.141 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:12.141 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:12.141 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:12.141 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:12.141 Installing symlink pointing to librte_mbuf.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:12.141 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:12.141 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:12.141 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:12.141 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:12.141 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:12.141 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:12.141 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:12.141 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:12.141 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:12.141 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:12.141 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:12.141 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:12.141 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:12.141 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:12.141 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:12.141 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:12.141 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:12.141 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:12.141 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:12.141 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:12.141 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:12.141 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:12.141 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:12.141 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 
00:03:12.141 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:12.141 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:12.141 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:12.141 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:12.141 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:12.141 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:12.141 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:12.141 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:12.141 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:12.141 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:12.141 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:12.141 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:12.141 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:12.141 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:12.141 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:12.141 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:12.141 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:12.141 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:12.141 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:12.141 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:12.141 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:12.141 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:12.141 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:12.141 Installing symlink pointing to librte_latencystats.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:12.141 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:12.141 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:12.141 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:12.141 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:12.141 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:12.141 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:12.141 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:12.141 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:12.141 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:12.141 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:12.141 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:12.141 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:12.141 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:12.141 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:12.141 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:12.141 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:12.142 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:12.142 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:12.142 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:12.142 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:12.142 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:12.142 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:12.142 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:12.142 Installing symlink pointing to librte_stack.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:12.142 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:12.142 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:12.142 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:12.142 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:12.142 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:12.142 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:12.142 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:12.142 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:12.142 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:12.142 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:12.142 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:12.142 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:12.142 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:12.142 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:12.142 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:12.142 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:12.142 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:12.142 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:12.142 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:12.142 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:12.142 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:12.142 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:12.142 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:12.142 Installing symlink pointing to 
librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:12.142 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:12.142 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:12.142 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:12.142 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:12.142 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:12.142 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:12.142 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:12.142 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:12.142 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:12.142 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:12.142 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:12.142 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:12.142 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:12.142 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:12.142 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:12.142 23:27:46 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:12.142 23:27:46 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.142 00:03:12.142 real 1m21.590s 00:03:12.142 user 14m25.476s 00:03:12.142 sys 1m50.021s 00:03:12.142 23:27:46 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:12.142 23:27:46 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:12.142 ************************************ 00:03:12.142 END TEST build_native_dpdk 00:03:12.142 ************************************ 00:03:12.142 23:27:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:12.142 23:27:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:12.142 23:27:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:12.142 23:27:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:12.142 23:27:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:12.142 23:27:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:12.142 23:27:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:12.142 23:27:46 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:12.142 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
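The libdpdk.pc / libdpdk-libs.pc files and the .so -> .so.23 -> .so.23.0 symlink chains installed above are exactly what the configure step picks up via the "Using .../pkgconfig for additional libs" line. A minimal sketch of sanity-checking that prefix by hand, assuming only a stock pkg-config and coreutils; the paths are the ones printed in the log, everything else is illustrative and not part of the autotest scripts:

    # Point pkg-config at the freshly installed DPDK build prefix (path from the log above).
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk   # DPDK version recorded in libdpdk.pc
    pkg-config --cflags libdpdk       # should expand to -I.../dpdk/build/include
    pkg-config --libs libdpdk         # should expand to -L.../dpdk/build/lib plus -lrte_* entries
    # Versioned symlink chain created above: librte_eal.so -> librte_eal.so.23 -> librte_eal.so.23.0
    ls -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so*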
00:03:12.142 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:12.142 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:12.400 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:12.659 Using 'verbs' RDMA provider 00:03:23.196 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:33.173 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:33.173 Creating mk/config.mk...done. 00:03:33.173 Creating mk/cc.flags.mk...done. 00:03:33.173 Type 'make' to build. 00:03:33.173 23:28:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:03:33.173 23:28:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:33.173 23:28:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:33.173 23:28:06 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.173 ************************************ 00:03:33.173 START TEST make 00:03:33.173 ************************************ 00:03:33.173 23:28:06 make -- common/autotest_common.sh@1129 -- $ make -j48 00:03:33.173 make[1]: Nothing to be done for 'all'. 00:03:34.119 The Meson build system 00:03:34.119 Version: 1.5.0 00:03:34.119 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:34.119 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:34.119 Build type: native build 00:03:34.119 Project name: libvfio-user 00:03:34.119 Project version: 0.0.1 00:03:34.119 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:34.119 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:34.119 Host machine cpu family: x86_64 00:03:34.119 Host machine cpu: x86_64 00:03:34.119 Run-time dependency threads found: YES 00:03:34.119 Library dl found: YES 00:03:34.119 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:34.119 Run-time dependency json-c found: YES 0.17 00:03:34.119 Run-time dependency cmocka found: YES 1.1.7 00:03:34.119 Program pytest-3 found: NO 00:03:34.119 Program flake8 found: NO 00:03:34.119 Program misspell-fixer found: NO 00:03:34.119 Program restructuredtext-lint found: NO 00:03:34.119 Program valgrind found: YES (/usr/bin/valgrind) 00:03:34.119 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:34.119 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:34.119 Compiler for C supports arguments -Wwrite-strings: YES 00:03:34.119 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:34.119 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:34.119 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:34.119 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:34.119 Build targets in project: 8 00:03:34.119 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:34.119 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:34.119 00:03:34.119 libvfio-user 0.0.1 00:03:34.119 00:03:34.119 User defined options 00:03:34.119 buildtype : debug 00:03:34.119 default_library: shared 00:03:34.119 libdir : /usr/local/lib 00:03:34.119 00:03:34.119 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:35.081 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:35.081 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:35.082 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:35.082 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:35.082 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:35.082 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:35.082 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:35.082 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:35.082 [8/37] Compiling C object samples/null.p/null.c.o 00:03:35.082 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:35.082 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:35.082 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:35.082 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:35.082 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:35.082 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:35.082 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:35.082 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:35.082 [17/37] Compiling C object samples/server.p/server.c.o 00:03:35.082 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:35.082 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:35.353 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:35.353 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:35.353 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:35.353 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:35.353 [24/37] Compiling C object samples/client.p/client.c.o 00:03:35.353 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:35.353 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:35.353 [27/37] Linking target samples/client 00:03:35.353 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:35.353 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:35.353 [30/37] Linking target test/unit_tests 00:03:35.353 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:35.614 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:35.614 [33/37] Linking target samples/server 00:03:35.614 [34/37] Linking target samples/lspci 00:03:35.614 [35/37] Linking target samples/gpio-pci-idio-16 00:03:35.614 [36/37] Linking target samples/null 00:03:35.614 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:35.614 INFO: autodetecting backend as ninja 00:03:35.614 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
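For reference, the libvfio-user build above is a plain Meson/Ninja out-of-tree build driven by SPDK's autobuild scripts. A rough standalone equivalent, with the option values taken from the "User defined options" summary above (the wrapper's exact invocation is not shown in the log, so the -D spellings below are an assumption):

    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    # Configure the out-of-tree build directory with the options reported in the summary.
    meson setup "$BUILD" "$SRC" -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    # Compile the 37 targets listed above.
    ninja -C "$BUILD"

The DESTDIR'd "meson install --quiet" that follows in the log then stages the result under spdk/build/libvfio-user rather than /usr/local.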
00:03:35.875 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:36.816 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:36.816 ninja: no work to do. 00:04:15.547 CC lib/log/log.o 00:04:15.547 CC lib/log/log_flags.o 00:04:15.547 CC lib/ut/ut.o 00:04:15.547 CC lib/log/log_deprecated.o 00:04:15.547 CC lib/ut_mock/mock.o 00:04:15.547 LIB libspdk_ut.a 00:04:15.547 LIB libspdk_ut_mock.a 00:04:15.547 LIB libspdk_log.a 00:04:15.547 SO libspdk_ut.so.2.0 00:04:15.547 SO libspdk_ut_mock.so.6.0 00:04:15.547 SO libspdk_log.so.7.1 00:04:15.547 SYMLINK libspdk_ut.so 00:04:15.547 SYMLINK libspdk_ut_mock.so 00:04:15.547 SYMLINK libspdk_log.so 00:04:15.547 CXX lib/trace_parser/trace.o 00:04:15.547 CC lib/ioat/ioat.o 00:04:15.547 CC lib/dma/dma.o 00:04:15.547 CC lib/util/base64.o 00:04:15.547 CC lib/util/bit_array.o 00:04:15.547 CC lib/util/cpuset.o 00:04:15.547 CC lib/util/crc16.o 00:04:15.547 CC lib/util/crc32.o 00:04:15.547 CC lib/util/crc32c.o 00:04:15.547 CC lib/util/crc32_ieee.o 00:04:15.547 CC lib/util/crc64.o 00:04:15.547 CC lib/util/dif.o 00:04:15.547 CC lib/util/fd.o 00:04:15.547 CC lib/util/fd_group.o 00:04:15.547 CC lib/util/file.o 00:04:15.547 CC lib/util/hexlify.o 00:04:15.547 CC lib/util/iov.o 00:04:15.547 CC lib/util/math.o 00:04:15.547 CC lib/util/net.o 00:04:15.547 CC lib/util/pipe.o 00:04:15.547 CC lib/util/strerror_tls.o 00:04:15.547 CC lib/util/string.o 00:04:15.547 CC lib/util/uuid.o 00:04:15.547 CC lib/util/xor.o 00:04:15.547 CC lib/util/zipf.o 00:04:15.547 CC lib/util/md5.o 00:04:15.547 CC lib/vfio_user/host/vfio_user_pci.o 00:04:15.547 CC lib/vfio_user/host/vfio_user.o 00:04:15.547 LIB libspdk_dma.a 00:04:15.547 LIB libspdk_ioat.a 00:04:15.547 SO libspdk_dma.so.5.0 00:04:15.547 SO libspdk_ioat.so.7.0 00:04:15.547 SYMLINK libspdk_dma.so 00:04:15.547 SYMLINK libspdk_ioat.so 00:04:15.547 LIB libspdk_vfio_user.a 00:04:15.547 SO libspdk_vfio_user.so.5.0 00:04:15.547 SYMLINK libspdk_vfio_user.so 00:04:15.547 LIB libspdk_util.a 00:04:15.547 SO libspdk_util.so.10.1 00:04:15.547 SYMLINK libspdk_util.so 00:04:15.547 CC lib/conf/conf.o 00:04:15.547 CC lib/rdma_utils/rdma_utils.o 00:04:15.547 CC lib/env_dpdk/env.o 00:04:15.547 CC lib/json/json_parse.o 00:04:15.547 CC lib/idxd/idxd.o 00:04:15.547 CC lib/env_dpdk/memory.o 00:04:15.547 CC lib/json/json_util.o 00:04:15.547 CC lib/idxd/idxd_user.o 00:04:15.547 CC lib/env_dpdk/pci.o 00:04:15.547 CC lib/json/json_write.o 00:04:15.547 CC lib/env_dpdk/init.o 00:04:15.547 CC lib/idxd/idxd_kernel.o 00:04:15.547 CC lib/env_dpdk/threads.o 00:04:15.547 CC lib/vmd/vmd.o 00:04:15.547 CC lib/env_dpdk/pci_ioat.o 00:04:15.547 CC lib/vmd/led.o 00:04:15.547 CC lib/env_dpdk/pci_virtio.o 00:04:15.547 CC lib/env_dpdk/pci_vmd.o 00:04:15.547 CC lib/env_dpdk/pci_idxd.o 00:04:15.547 CC lib/env_dpdk/pci_event.o 00:04:15.547 CC lib/env_dpdk/pci_dpdk.o 00:04:15.547 CC lib/env_dpdk/sigbus_handler.o 00:04:15.547 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:15.547 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:15.547 LIB libspdk_trace_parser.a 00:04:15.547 SO libspdk_trace_parser.so.6.0 00:04:15.547 SYMLINK libspdk_trace_parser.so 00:04:15.547 LIB libspdk_conf.a 00:04:15.547 SO libspdk_conf.so.6.0 00:04:15.547 LIB libspdk_rdma_utils.a 00:04:15.547 LIB libspdk_json.a 00:04:15.547 SO libspdk_rdma_utils.so.1.0 00:04:15.547 SYMLINK libspdk_conf.so 00:04:15.547 SO libspdk_json.so.6.0 00:04:15.547 
SYMLINK libspdk_rdma_utils.so 00:04:15.547 SYMLINK libspdk_json.so 00:04:15.547 CC lib/rdma_provider/common.o 00:04:15.547 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:15.547 CC lib/jsonrpc/jsonrpc_server.o 00:04:15.547 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:15.547 CC lib/jsonrpc/jsonrpc_client.o 00:04:15.547 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:15.547 LIB libspdk_idxd.a 00:04:15.547 SO libspdk_idxd.so.12.1 00:04:15.547 LIB libspdk_vmd.a 00:04:15.547 SYMLINK libspdk_idxd.so 00:04:15.547 SO libspdk_vmd.so.6.0 00:04:15.547 SYMLINK libspdk_vmd.so 00:04:15.547 LIB libspdk_rdma_provider.a 00:04:15.547 SO libspdk_rdma_provider.so.7.0 00:04:15.547 LIB libspdk_jsonrpc.a 00:04:15.547 SYMLINK libspdk_rdma_provider.so 00:04:15.547 SO libspdk_jsonrpc.so.6.0 00:04:15.547 SYMLINK libspdk_jsonrpc.so 00:04:15.547 CC lib/rpc/rpc.o 00:04:15.547 LIB libspdk_rpc.a 00:04:15.547 SO libspdk_rpc.so.6.0 00:04:15.547 SYMLINK libspdk_rpc.so 00:04:15.547 CC lib/keyring/keyring.o 00:04:15.547 CC lib/keyring/keyring_rpc.o 00:04:15.547 CC lib/notify/notify.o 00:04:15.547 CC lib/trace/trace.o 00:04:15.547 CC lib/notify/notify_rpc.o 00:04:15.547 CC lib/trace/trace_flags.o 00:04:15.547 CC lib/trace/trace_rpc.o 00:04:15.547 LIB libspdk_notify.a 00:04:15.547 SO libspdk_notify.so.6.0 00:04:15.547 SYMLINK libspdk_notify.so 00:04:15.547 LIB libspdk_keyring.a 00:04:15.547 LIB libspdk_trace.a 00:04:15.547 SO libspdk_keyring.so.2.0 00:04:15.547 SO libspdk_trace.so.11.0 00:04:15.547 SYMLINK libspdk_keyring.so 00:04:15.547 SYMLINK libspdk_trace.so 00:04:15.547 LIB libspdk_env_dpdk.a 00:04:15.547 CC lib/sock/sock.o 00:04:15.547 CC lib/thread/thread.o 00:04:15.547 CC lib/sock/sock_rpc.o 00:04:15.547 CC lib/thread/iobuf.o 00:04:15.547 SO libspdk_env_dpdk.so.15.1 00:04:15.547 SYMLINK libspdk_env_dpdk.so 00:04:15.547 LIB libspdk_sock.a 00:04:15.547 SO libspdk_sock.so.10.0 00:04:15.547 SYMLINK libspdk_sock.so 00:04:15.805 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:15.805 CC lib/nvme/nvme_ctrlr.o 00:04:15.805 CC lib/nvme/nvme_fabric.o 00:04:15.805 CC lib/nvme/nvme_ns_cmd.o 00:04:15.805 CC lib/nvme/nvme_ns.o 00:04:15.805 CC lib/nvme/nvme_pcie_common.o 00:04:15.805 CC lib/nvme/nvme_pcie.o 00:04:15.805 CC lib/nvme/nvme_qpair.o 00:04:15.805 CC lib/nvme/nvme.o 00:04:15.805 CC lib/nvme/nvme_quirks.o 00:04:15.805 CC lib/nvme/nvme_transport.o 00:04:15.805 CC lib/nvme/nvme_discovery.o 00:04:15.805 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:15.805 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:15.805 CC lib/nvme/nvme_tcp.o 00:04:15.805 CC lib/nvme/nvme_opal.o 00:04:15.805 CC lib/nvme/nvme_io_msg.o 00:04:15.805 CC lib/nvme/nvme_poll_group.o 00:04:15.805 CC lib/nvme/nvme_zns.o 00:04:15.805 CC lib/nvme/nvme_stubs.o 00:04:15.805 CC lib/nvme/nvme_auth.o 00:04:15.805 CC lib/nvme/nvme_cuse.o 00:04:15.805 CC lib/nvme/nvme_vfio_user.o 00:04:15.805 CC lib/nvme/nvme_rdma.o 00:04:16.832 LIB libspdk_thread.a 00:04:16.832 SO libspdk_thread.so.11.0 00:04:16.832 SYMLINK libspdk_thread.so 00:04:17.091 CC lib/accel/accel.o 00:04:17.091 CC lib/blob/blobstore.o 00:04:17.091 CC lib/init/json_config.o 00:04:17.091 CC lib/virtio/virtio.o 00:04:17.091 CC lib/fsdev/fsdev.o 00:04:17.091 CC lib/vfu_tgt/tgt_endpoint.o 00:04:17.091 CC lib/accel/accel_rpc.o 00:04:17.091 CC lib/init/subsystem.o 00:04:17.091 CC lib/blob/request.o 00:04:17.091 CC lib/vfu_tgt/tgt_rpc.o 00:04:17.091 CC lib/virtio/virtio_vhost_user.o 00:04:17.091 CC lib/fsdev/fsdev_io.o 00:04:17.091 CC lib/blob/zeroes.o 00:04:17.091 CC lib/init/subsystem_rpc.o 00:04:17.091 CC lib/accel/accel_sw.o 00:04:17.091 CC 
lib/virtio/virtio_vfio_user.o 00:04:17.091 CC lib/blob/blob_bs_dev.o 00:04:17.091 CC lib/virtio/virtio_pci.o 00:04:17.091 CC lib/fsdev/fsdev_rpc.o 00:04:17.091 CC lib/init/rpc.o 00:04:17.350 LIB libspdk_init.a 00:04:17.350 SO libspdk_init.so.6.0 00:04:17.350 LIB libspdk_virtio.a 00:04:17.350 LIB libspdk_vfu_tgt.a 00:04:17.350 SYMLINK libspdk_init.so 00:04:17.608 SO libspdk_vfu_tgt.so.3.0 00:04:17.608 SO libspdk_virtio.so.7.0 00:04:17.608 SYMLINK libspdk_vfu_tgt.so 00:04:17.608 SYMLINK libspdk_virtio.so 00:04:17.608 CC lib/event/app.o 00:04:17.608 CC lib/event/reactor.o 00:04:17.608 CC lib/event/log_rpc.o 00:04:17.608 CC lib/event/app_rpc.o 00:04:17.608 CC lib/event/scheduler_static.o 00:04:17.867 LIB libspdk_fsdev.a 00:04:17.867 SO libspdk_fsdev.so.2.0 00:04:17.867 SYMLINK libspdk_fsdev.so 00:04:18.125 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:18.125 LIB libspdk_event.a 00:04:18.125 SO libspdk_event.so.14.0 00:04:18.125 SYMLINK libspdk_event.so 00:04:18.383 LIB libspdk_accel.a 00:04:18.383 SO libspdk_accel.so.16.0 00:04:18.383 LIB libspdk_nvme.a 00:04:18.383 SYMLINK libspdk_accel.so 00:04:18.383 SO libspdk_nvme.so.15.0 00:04:18.641 CC lib/bdev/bdev.o 00:04:18.641 CC lib/bdev/bdev_rpc.o 00:04:18.641 CC lib/bdev/bdev_zone.o 00:04:18.641 CC lib/bdev/part.o 00:04:18.641 CC lib/bdev/scsi_nvme.o 00:04:18.641 SYMLINK libspdk_nvme.so 00:04:18.641 LIB libspdk_fuse_dispatcher.a 00:04:18.641 SO libspdk_fuse_dispatcher.so.1.0 00:04:18.899 SYMLINK libspdk_fuse_dispatcher.so 00:04:20.272 LIB libspdk_blob.a 00:04:20.272 SO libspdk_blob.so.11.0 00:04:20.272 SYMLINK libspdk_blob.so 00:04:20.530 CC lib/lvol/lvol.o 00:04:20.530 CC lib/blobfs/blobfs.o 00:04:20.530 CC lib/blobfs/tree.o 00:04:21.097 LIB libspdk_bdev.a 00:04:21.356 SO libspdk_bdev.so.17.0 00:04:21.356 SYMLINK libspdk_bdev.so 00:04:21.356 LIB libspdk_blobfs.a 00:04:21.356 SO libspdk_blobfs.so.10.0 00:04:21.356 SYMLINK libspdk_blobfs.so 00:04:21.625 LIB libspdk_lvol.a 00:04:21.625 CC lib/scsi/dev.o 00:04:21.625 CC lib/nbd/nbd.o 00:04:21.625 CC lib/nvmf/ctrlr.o 00:04:21.625 CC lib/nbd/nbd_rpc.o 00:04:21.625 CC lib/ublk/ublk.o 00:04:21.625 CC lib/nvmf/ctrlr_discovery.o 00:04:21.625 CC lib/scsi/lun.o 00:04:21.625 CC lib/nvmf/ctrlr_bdev.o 00:04:21.625 CC lib/ublk/ublk_rpc.o 00:04:21.625 CC lib/scsi/port.o 00:04:21.625 CC lib/ftl/ftl_core.o 00:04:21.625 CC lib/nvmf/subsystem.o 00:04:21.625 CC lib/scsi/scsi.o 00:04:21.625 CC lib/nvmf/nvmf.o 00:04:21.625 CC lib/ftl/ftl_init.o 00:04:21.625 CC lib/scsi/scsi_bdev.o 00:04:21.625 CC lib/ftl/ftl_layout.o 00:04:21.625 CC lib/nvmf/nvmf_rpc.o 00:04:21.625 CC lib/scsi/scsi_pr.o 00:04:21.625 CC lib/nvmf/transport.o 00:04:21.625 CC lib/ftl/ftl_debug.o 00:04:21.625 CC lib/nvmf/tcp.o 00:04:21.625 CC lib/scsi/scsi_rpc.o 00:04:21.625 CC lib/ftl/ftl_io.o 00:04:21.625 CC lib/nvmf/stubs.o 00:04:21.625 CC lib/scsi/task.o 00:04:21.625 CC lib/ftl/ftl_sb.o 00:04:21.625 CC lib/nvmf/mdns_server.o 00:04:21.625 CC lib/ftl/ftl_l2p.o 00:04:21.625 CC lib/ftl/ftl_l2p_flat.o 00:04:21.625 CC lib/nvmf/vfio_user.o 00:04:21.625 CC lib/ftl/ftl_nv_cache.o 00:04:21.625 CC lib/nvmf/rdma.o 00:04:21.625 CC lib/nvmf/auth.o 00:04:21.625 CC lib/ftl/ftl_band.o 00:04:21.625 CC lib/ftl/ftl_band_ops.o 00:04:21.625 CC lib/ftl/ftl_writer.o 00:04:21.625 CC lib/ftl/ftl_rq.o 00:04:21.625 CC lib/ftl/ftl_reloc.o 00:04:21.625 CC lib/ftl/ftl_l2p_cache.o 00:04:21.625 CC lib/ftl/ftl_p2l.o 00:04:21.625 CC lib/ftl/ftl_p2l_log.o 00:04:21.625 CC lib/ftl/mngt/ftl_mngt.o 00:04:21.625 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:21.625 CC lib/ftl/mngt/ftl_mngt_shutdown.o 
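The LIB, SO and SYMLINK lines in this stretch of the build are SPDK's make rules producing, for each library, the static archive, the versioned shared object and its unversioned symlink. A quick post-build sanity check might look like the sketch below; build/lib as the output directory is an assumption based on SPDK's default layout.

ls -l build/lib/libspdk_util.*                     # expect the .a, the .so.<maj>.<min> and the .so symlink
readlink build/lib/libspdk_util.so                 # the symlink should resolve to the versioned object
nm -D --defined-only build/lib/libspdk_util.so | head   # spot-check the exported symbols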
00:04:21.625 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:21.625 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:21.625 SO libspdk_lvol.so.10.0 00:04:21.625 SYMLINK libspdk_lvol.so 00:04:21.625 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:21.883 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:21.883 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:21.883 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:21.883 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:21.883 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:21.883 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:21.883 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:21.883 CC lib/ftl/utils/ftl_conf.o 00:04:21.883 CC lib/ftl/utils/ftl_md.o 00:04:21.883 CC lib/ftl/utils/ftl_mempool.o 00:04:21.883 CC lib/ftl/utils/ftl_bitmap.o 00:04:21.883 CC lib/ftl/utils/ftl_property.o 00:04:22.144 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:22.144 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:22.144 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:22.144 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:22.144 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:22.144 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:22.144 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:22.144 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:22.144 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:22.144 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:22.144 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:22.144 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:22.144 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:22.144 CC lib/ftl/base/ftl_base_dev.o 00:04:22.405 CC lib/ftl/base/ftl_base_bdev.o 00:04:22.405 CC lib/ftl/ftl_trace.o 00:04:22.405 LIB libspdk_nbd.a 00:04:22.405 SO libspdk_nbd.so.7.0 00:04:22.405 SYMLINK libspdk_nbd.so 00:04:22.405 LIB libspdk_scsi.a 00:04:22.664 SO libspdk_scsi.so.9.0 00:04:22.664 LIB libspdk_ublk.a 00:04:22.664 SO libspdk_ublk.so.3.0 00:04:22.664 SYMLINK libspdk_scsi.so 00:04:22.664 SYMLINK libspdk_ublk.so 00:04:22.922 CC lib/iscsi/conn.o 00:04:22.922 CC lib/vhost/vhost.o 00:04:22.922 CC lib/vhost/vhost_rpc.o 00:04:22.922 CC lib/iscsi/init_grp.o 00:04:22.922 CC lib/iscsi/iscsi.o 00:04:22.922 CC lib/vhost/vhost_scsi.o 00:04:22.922 CC lib/iscsi/param.o 00:04:22.922 CC lib/vhost/vhost_blk.o 00:04:22.922 CC lib/iscsi/portal_grp.o 00:04:22.922 CC lib/vhost/rte_vhost_user.o 00:04:22.922 CC lib/iscsi/tgt_node.o 00:04:22.922 CC lib/iscsi/iscsi_subsystem.o 00:04:22.922 CC lib/iscsi/iscsi_rpc.o 00:04:22.922 CC lib/iscsi/task.o 00:04:22.922 LIB libspdk_ftl.a 00:04:23.181 SO libspdk_ftl.so.9.0 00:04:23.439 SYMLINK libspdk_ftl.so 00:04:24.005 LIB libspdk_vhost.a 00:04:24.005 SO libspdk_vhost.so.8.0 00:04:24.264 LIB libspdk_nvmf.a 00:04:24.264 SYMLINK libspdk_vhost.so 00:04:24.264 SO libspdk_nvmf.so.20.0 00:04:24.264 LIB libspdk_iscsi.a 00:04:24.264 SO libspdk_iscsi.so.8.0 00:04:24.522 SYMLINK libspdk_nvmf.so 00:04:24.522 SYMLINK libspdk_iscsi.so 00:04:24.781 CC module/env_dpdk/env_dpdk_rpc.o 00:04:24.781 CC module/vfu_device/vfu_virtio.o 00:04:24.781 CC module/vfu_device/vfu_virtio_blk.o 00:04:24.781 CC module/vfu_device/vfu_virtio_scsi.o 00:04:24.781 CC module/vfu_device/vfu_virtio_rpc.o 00:04:24.781 CC module/vfu_device/vfu_virtio_fs.o 00:04:24.781 CC module/keyring/file/keyring.o 00:04:24.781 CC module/accel/error/accel_error.o 00:04:24.781 CC module/blob/bdev/blob_bdev.o 00:04:24.781 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:24.781 CC module/sock/posix/posix.o 00:04:24.781 CC module/scheduler/gscheduler/gscheduler.o 00:04:24.781 CC module/keyring/file/keyring_rpc.o 00:04:24.781 CC module/accel/error/accel_error_rpc.o 00:04:24.781 CC module/fsdev/aio/fsdev_aio.o 00:04:24.781 CC 
module/fsdev/aio/fsdev_aio_rpc.o 00:04:24.781 CC module/accel/iaa/accel_iaa.o 00:04:24.781 CC module/fsdev/aio/linux_aio_mgr.o 00:04:24.781 CC module/accel/dsa/accel_dsa.o 00:04:24.781 CC module/accel/iaa/accel_iaa_rpc.o 00:04:24.781 CC module/keyring/linux/keyring.o 00:04:24.781 CC module/accel/dsa/accel_dsa_rpc.o 00:04:24.782 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:24.782 CC module/accel/ioat/accel_ioat.o 00:04:24.782 CC module/accel/ioat/accel_ioat_rpc.o 00:04:24.782 CC module/keyring/linux/keyring_rpc.o 00:04:24.782 LIB libspdk_env_dpdk_rpc.a 00:04:25.039 SO libspdk_env_dpdk_rpc.so.6.0 00:04:25.039 SYMLINK libspdk_env_dpdk_rpc.so 00:04:25.039 LIB libspdk_keyring_linux.a 00:04:25.039 LIB libspdk_scheduler_gscheduler.a 00:04:25.039 LIB libspdk_scheduler_dpdk_governor.a 00:04:25.039 SO libspdk_keyring_linux.so.1.0 00:04:25.039 SO libspdk_scheduler_gscheduler.so.4.0 00:04:25.039 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:25.039 LIB libspdk_scheduler_dynamic.a 00:04:25.039 LIB libspdk_accel_error.a 00:04:25.039 LIB libspdk_accel_iaa.a 00:04:25.039 LIB libspdk_keyring_file.a 00:04:25.039 SYMLINK libspdk_keyring_linux.so 00:04:25.039 SYMLINK libspdk_scheduler_gscheduler.so 00:04:25.039 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:25.039 SO libspdk_scheduler_dynamic.so.4.0 00:04:25.039 SO libspdk_accel_error.so.2.0 00:04:25.039 SO libspdk_accel_iaa.so.3.0 00:04:25.039 SO libspdk_keyring_file.so.2.0 00:04:25.039 LIB libspdk_accel_ioat.a 00:04:25.039 SYMLINK libspdk_scheduler_dynamic.so 00:04:25.039 LIB libspdk_blob_bdev.a 00:04:25.039 SYMLINK libspdk_accel_error.so 00:04:25.039 SYMLINK libspdk_keyring_file.so 00:04:25.039 SO libspdk_accel_ioat.so.6.0 00:04:25.039 SYMLINK libspdk_accel_iaa.so 00:04:25.039 LIB libspdk_accel_dsa.a 00:04:25.039 SO libspdk_blob_bdev.so.11.0 00:04:25.297 SO libspdk_accel_dsa.so.5.0 00:04:25.297 SYMLINK libspdk_accel_ioat.so 00:04:25.297 SYMLINK libspdk_blob_bdev.so 00:04:25.297 SYMLINK libspdk_accel_dsa.so 00:04:25.560 LIB libspdk_vfu_device.a 00:04:25.560 SO libspdk_vfu_device.so.3.0 00:04:25.560 CC module/bdev/gpt/gpt.o 00:04:25.560 CC module/bdev/delay/vbdev_delay.o 00:04:25.560 CC module/bdev/malloc/bdev_malloc.o 00:04:25.560 CC module/bdev/gpt/vbdev_gpt.o 00:04:25.560 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:25.560 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:25.560 CC module/bdev/null/bdev_null.o 00:04:25.560 CC module/bdev/error/vbdev_error.o 00:04:25.560 CC module/bdev/null/bdev_null_rpc.o 00:04:25.560 CC module/bdev/error/vbdev_error_rpc.o 00:04:25.560 CC module/blobfs/bdev/blobfs_bdev.o 00:04:25.560 CC module/bdev/aio/bdev_aio.o 00:04:25.560 CC module/bdev/lvol/vbdev_lvol.o 00:04:25.560 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:25.560 CC module/bdev/aio/bdev_aio_rpc.o 00:04:25.560 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:25.560 CC module/bdev/split/vbdev_split.o 00:04:25.560 CC module/bdev/iscsi/bdev_iscsi.o 00:04:25.560 CC module/bdev/passthru/vbdev_passthru.o 00:04:25.560 CC module/bdev/nvme/bdev_nvme.o 00:04:25.560 CC module/bdev/raid/bdev_raid.o 00:04:25.560 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:25.560 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:25.560 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:25.560 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:25.560 CC module/bdev/split/vbdev_split_rpc.o 00:04:25.560 CC module/bdev/raid/bdev_raid_rpc.o 00:04:25.560 CC module/bdev/raid/bdev_raid_sb.o 00:04:25.560 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:25.560 CC module/bdev/raid/raid0.o 00:04:25.560 CC 
module/bdev/nvme/nvme_rpc.o 00:04:25.560 CC module/bdev/nvme/bdev_mdns_client.o 00:04:25.560 CC module/bdev/raid/raid1.o 00:04:25.560 CC module/bdev/nvme/vbdev_opal.o 00:04:25.560 CC module/bdev/raid/concat.o 00:04:25.560 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:25.560 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:25.560 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:25.560 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:25.560 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:25.560 CC module/bdev/ftl/bdev_ftl.o 00:04:25.560 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:25.560 SYMLINK libspdk_vfu_device.so 00:04:25.560 LIB libspdk_fsdev_aio.a 00:04:25.818 SO libspdk_fsdev_aio.so.1.0 00:04:25.818 LIB libspdk_sock_posix.a 00:04:25.818 SO libspdk_sock_posix.so.6.0 00:04:25.818 SYMLINK libspdk_fsdev_aio.so 00:04:25.818 LIB libspdk_blobfs_bdev.a 00:04:25.818 SO libspdk_blobfs_bdev.so.6.0 00:04:25.818 SYMLINK libspdk_sock_posix.so 00:04:26.077 LIB libspdk_bdev_split.a 00:04:26.077 SYMLINK libspdk_blobfs_bdev.so 00:04:26.077 LIB libspdk_bdev_gpt.a 00:04:26.077 SO libspdk_bdev_split.so.6.0 00:04:26.077 SO libspdk_bdev_gpt.so.6.0 00:04:26.077 LIB libspdk_bdev_error.a 00:04:26.077 LIB libspdk_bdev_null.a 00:04:26.077 SO libspdk_bdev_error.so.6.0 00:04:26.077 LIB libspdk_bdev_passthru.a 00:04:26.077 SO libspdk_bdev_null.so.6.0 00:04:26.077 SYMLINK libspdk_bdev_split.so 00:04:26.077 SO libspdk_bdev_passthru.so.6.0 00:04:26.077 SYMLINK libspdk_bdev_gpt.so 00:04:26.077 LIB libspdk_bdev_ftl.a 00:04:26.077 LIB libspdk_bdev_aio.a 00:04:26.077 SYMLINK libspdk_bdev_error.so 00:04:26.077 SO libspdk_bdev_ftl.so.6.0 00:04:26.077 SO libspdk_bdev_aio.so.6.0 00:04:26.077 SYMLINK libspdk_bdev_null.so 00:04:26.077 LIB libspdk_bdev_zone_block.a 00:04:26.077 LIB libspdk_bdev_iscsi.a 00:04:26.077 SYMLINK libspdk_bdev_passthru.so 00:04:26.077 SO libspdk_bdev_zone_block.so.6.0 00:04:26.077 SO libspdk_bdev_iscsi.so.6.0 00:04:26.077 LIB libspdk_bdev_malloc.a 00:04:26.077 SYMLINK libspdk_bdev_ftl.so 00:04:26.077 LIB libspdk_bdev_delay.a 00:04:26.077 SYMLINK libspdk_bdev_aio.so 00:04:26.077 SO libspdk_bdev_malloc.so.6.0 00:04:26.077 SO libspdk_bdev_delay.so.6.0 00:04:26.077 SYMLINK libspdk_bdev_zone_block.so 00:04:26.077 SYMLINK libspdk_bdev_iscsi.so 00:04:26.335 SYMLINK libspdk_bdev_malloc.so 00:04:26.335 SYMLINK libspdk_bdev_delay.so 00:04:26.335 LIB libspdk_bdev_virtio.a 00:04:26.335 SO libspdk_bdev_virtio.so.6.0 00:04:26.335 LIB libspdk_bdev_lvol.a 00:04:26.335 SO libspdk_bdev_lvol.so.6.0 00:04:26.335 SYMLINK libspdk_bdev_virtio.so 00:04:26.335 SYMLINK libspdk_bdev_lvol.so 00:04:26.593 LIB libspdk_bdev_raid.a 00:04:26.850 SO libspdk_bdev_raid.so.6.0 00:04:26.850 SYMLINK libspdk_bdev_raid.so 00:04:28.224 LIB libspdk_bdev_nvme.a 00:04:28.224 SO libspdk_bdev_nvme.so.7.1 00:04:28.482 SYMLINK libspdk_bdev_nvme.so 00:04:28.741 CC module/event/subsystems/vmd/vmd.o 00:04:28.741 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:28.741 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:28.741 CC module/event/subsystems/iobuf/iobuf.o 00:04:28.741 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:28.741 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:28.741 CC module/event/subsystems/fsdev/fsdev.o 00:04:28.741 CC module/event/subsystems/keyring/keyring.o 00:04:28.741 CC module/event/subsystems/sock/sock.o 00:04:28.741 CC module/event/subsystems/scheduler/scheduler.o 00:04:28.999 LIB libspdk_event_keyring.a 00:04:28.999 LIB libspdk_event_vhost_blk.a 00:04:28.999 LIB libspdk_event_fsdev.a 00:04:28.999 LIB libspdk_event_vfu_tgt.a 
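After the bdev modules and event subsystems compiled above are linked into the applications, a running target can report what was actually built in. The sketch below is illustrative only: it assumes hugepages are already configured (the setup.sh calls appear later in this log) and uses the build/bin and scripts paths from the job workspace; the autotest scripts wait with waitforlisten rather than a fixed sleep.

./build/bin/spdk_tgt &                             # start the target in the background
target_pid=$!
sleep 2                                            # crude wait; assumption, the trace uses waitforlisten
./scripts/rpc.py framework_get_subsystems | jq -r '.[].subsystem'
./scripts/rpc.py bdev_get_bdevs                    # empty array until a bdev is created
kill "$target_pid"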
00:04:28.999 LIB libspdk_event_vmd.a 00:04:28.999 LIB libspdk_event_scheduler.a 00:04:28.999 LIB libspdk_event_sock.a 00:04:28.999 SO libspdk_event_keyring.so.1.0 00:04:28.999 SO libspdk_event_vhost_blk.so.3.0 00:04:28.999 LIB libspdk_event_iobuf.a 00:04:28.999 SO libspdk_event_fsdev.so.1.0 00:04:28.999 SO libspdk_event_vfu_tgt.so.3.0 00:04:28.999 SO libspdk_event_scheduler.so.4.0 00:04:28.999 SO libspdk_event_vmd.so.6.0 00:04:28.999 SO libspdk_event_sock.so.5.0 00:04:28.999 SO libspdk_event_iobuf.so.3.0 00:04:28.999 SYMLINK libspdk_event_keyring.so 00:04:28.999 SYMLINK libspdk_event_vhost_blk.so 00:04:28.999 SYMLINK libspdk_event_fsdev.so 00:04:28.999 SYMLINK libspdk_event_vfu_tgt.so 00:04:28.999 SYMLINK libspdk_event_scheduler.so 00:04:28.999 SYMLINK libspdk_event_sock.so 00:04:28.999 SYMLINK libspdk_event_vmd.so 00:04:28.999 SYMLINK libspdk_event_iobuf.so 00:04:29.257 CC module/event/subsystems/accel/accel.o 00:04:29.257 LIB libspdk_event_accel.a 00:04:29.257 SO libspdk_event_accel.so.6.0 00:04:29.517 SYMLINK libspdk_event_accel.so 00:04:29.517 CC module/event/subsystems/bdev/bdev.o 00:04:29.775 LIB libspdk_event_bdev.a 00:04:29.775 SO libspdk_event_bdev.so.6.0 00:04:29.775 SYMLINK libspdk_event_bdev.so 00:04:30.034 CC module/event/subsystems/ublk/ublk.o 00:04:30.034 CC module/event/subsystems/nbd/nbd.o 00:04:30.034 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:30.034 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:30.034 CC module/event/subsystems/scsi/scsi.o 00:04:30.034 LIB libspdk_event_ublk.a 00:04:30.034 LIB libspdk_event_nbd.a 00:04:30.293 LIB libspdk_event_scsi.a 00:04:30.293 SO libspdk_event_nbd.so.6.0 00:04:30.293 SO libspdk_event_ublk.so.3.0 00:04:30.293 SO libspdk_event_scsi.so.6.0 00:04:30.293 SYMLINK libspdk_event_nbd.so 00:04:30.293 SYMLINK libspdk_event_ublk.so 00:04:30.293 SYMLINK libspdk_event_scsi.so 00:04:30.293 LIB libspdk_event_nvmf.a 00:04:30.293 SO libspdk_event_nvmf.so.6.0 00:04:30.293 SYMLINK libspdk_event_nvmf.so 00:04:30.293 CC module/event/subsystems/iscsi/iscsi.o 00:04:30.293 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:30.551 LIB libspdk_event_vhost_scsi.a 00:04:30.551 LIB libspdk_event_iscsi.a 00:04:30.551 SO libspdk_event_vhost_scsi.so.3.0 00:04:30.551 SO libspdk_event_iscsi.so.6.0 00:04:30.551 SYMLINK libspdk_event_vhost_scsi.so 00:04:30.551 SYMLINK libspdk_event_iscsi.so 00:04:30.809 SO libspdk.so.6.0 00:04:30.809 SYMLINK libspdk.so 00:04:31.072 CXX app/trace/trace.o 00:04:31.072 CC app/trace_record/trace_record.o 00:04:31.072 CC app/spdk_nvme_perf/perf.o 00:04:31.072 CC app/spdk_nvme_identify/identify.o 00:04:31.072 CC app/spdk_nvme_discover/discovery_aer.o 00:04:31.072 CC app/spdk_top/spdk_top.o 00:04:31.072 CC app/spdk_lspci/spdk_lspci.o 00:04:31.072 TEST_HEADER include/spdk/accel.h 00:04:31.072 CC test/rpc_client/rpc_client_test.o 00:04:31.072 TEST_HEADER include/spdk/accel_module.h 00:04:31.072 TEST_HEADER include/spdk/assert.h 00:04:31.072 TEST_HEADER include/spdk/barrier.h 00:04:31.072 TEST_HEADER include/spdk/base64.h 00:04:31.072 TEST_HEADER include/spdk/bdev.h 00:04:31.072 TEST_HEADER include/spdk/bdev_module.h 00:04:31.072 TEST_HEADER include/spdk/bdev_zone.h 00:04:31.073 TEST_HEADER include/spdk/bit_array.h 00:04:31.073 TEST_HEADER include/spdk/bit_pool.h 00:04:31.073 TEST_HEADER include/spdk/blob_bdev.h 00:04:31.073 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:31.073 TEST_HEADER include/spdk/blobfs.h 00:04:31.073 TEST_HEADER include/spdk/blob.h 00:04:31.073 TEST_HEADER include/spdk/conf.h 00:04:31.073 TEST_HEADER 
include/spdk/config.h 00:04:31.073 TEST_HEADER include/spdk/cpuset.h 00:04:31.073 TEST_HEADER include/spdk/crc16.h 00:04:31.073 TEST_HEADER include/spdk/crc32.h 00:04:31.073 TEST_HEADER include/spdk/crc64.h 00:04:31.073 TEST_HEADER include/spdk/dif.h 00:04:31.073 TEST_HEADER include/spdk/dma.h 00:04:31.073 TEST_HEADER include/spdk/endian.h 00:04:31.073 TEST_HEADER include/spdk/env_dpdk.h 00:04:31.073 TEST_HEADER include/spdk/env.h 00:04:31.073 TEST_HEADER include/spdk/event.h 00:04:31.073 TEST_HEADER include/spdk/fd_group.h 00:04:31.073 TEST_HEADER include/spdk/fd.h 00:04:31.073 TEST_HEADER include/spdk/file.h 00:04:31.073 TEST_HEADER include/spdk/fsdev.h 00:04:31.073 TEST_HEADER include/spdk/fsdev_module.h 00:04:31.073 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:31.073 TEST_HEADER include/spdk/ftl.h 00:04:31.073 TEST_HEADER include/spdk/gpt_spec.h 00:04:31.073 TEST_HEADER include/spdk/hexlify.h 00:04:31.073 TEST_HEADER include/spdk/histogram_data.h 00:04:31.073 TEST_HEADER include/spdk/idxd.h 00:04:31.073 TEST_HEADER include/spdk/idxd_spec.h 00:04:31.073 TEST_HEADER include/spdk/init.h 00:04:31.073 TEST_HEADER include/spdk/ioat.h 00:04:31.073 TEST_HEADER include/spdk/ioat_spec.h 00:04:31.073 TEST_HEADER include/spdk/iscsi_spec.h 00:04:31.073 TEST_HEADER include/spdk/json.h 00:04:31.073 TEST_HEADER include/spdk/jsonrpc.h 00:04:31.073 TEST_HEADER include/spdk/keyring_module.h 00:04:31.073 TEST_HEADER include/spdk/keyring.h 00:04:31.073 TEST_HEADER include/spdk/likely.h 00:04:31.073 TEST_HEADER include/spdk/log.h 00:04:31.073 TEST_HEADER include/spdk/lvol.h 00:04:31.073 TEST_HEADER include/spdk/md5.h 00:04:31.073 TEST_HEADER include/spdk/memory.h 00:04:31.073 TEST_HEADER include/spdk/mmio.h 00:04:31.073 TEST_HEADER include/spdk/nbd.h 00:04:31.073 TEST_HEADER include/spdk/net.h 00:04:31.073 TEST_HEADER include/spdk/notify.h 00:04:31.073 TEST_HEADER include/spdk/nvme.h 00:04:31.073 TEST_HEADER include/spdk/nvme_intel.h 00:04:31.073 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:31.073 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:31.073 TEST_HEADER include/spdk/nvme_spec.h 00:04:31.073 TEST_HEADER include/spdk/nvme_zns.h 00:04:31.073 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:31.073 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:31.073 TEST_HEADER include/spdk/nvmf.h 00:04:31.073 TEST_HEADER include/spdk/nvmf_transport.h 00:04:31.073 TEST_HEADER include/spdk/nvmf_spec.h 00:04:31.073 TEST_HEADER include/spdk/opal.h 00:04:31.073 TEST_HEADER include/spdk/opal_spec.h 00:04:31.073 TEST_HEADER include/spdk/pci_ids.h 00:04:31.073 TEST_HEADER include/spdk/pipe.h 00:04:31.073 TEST_HEADER include/spdk/queue.h 00:04:31.073 TEST_HEADER include/spdk/rpc.h 00:04:31.073 TEST_HEADER include/spdk/reduce.h 00:04:31.073 TEST_HEADER include/spdk/scheduler.h 00:04:31.073 TEST_HEADER include/spdk/scsi.h 00:04:31.073 TEST_HEADER include/spdk/scsi_spec.h 00:04:31.073 TEST_HEADER include/spdk/sock.h 00:04:31.073 TEST_HEADER include/spdk/string.h 00:04:31.073 TEST_HEADER include/spdk/stdinc.h 00:04:31.073 TEST_HEADER include/spdk/thread.h 00:04:31.073 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:31.073 TEST_HEADER include/spdk/trace_parser.h 00:04:31.073 TEST_HEADER include/spdk/trace.h 00:04:31.073 TEST_HEADER include/spdk/ublk.h 00:04:31.073 TEST_HEADER include/spdk/tree.h 00:04:31.073 TEST_HEADER include/spdk/util.h 00:04:31.073 TEST_HEADER include/spdk/uuid.h 00:04:31.073 TEST_HEADER include/spdk/version.h 00:04:31.073 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:31.073 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:04:31.073 TEST_HEADER include/spdk/vmd.h 00:04:31.073 TEST_HEADER include/spdk/vhost.h 00:04:31.073 TEST_HEADER include/spdk/xor.h 00:04:31.073 TEST_HEADER include/spdk/zipf.h 00:04:31.073 CXX test/cpp_headers/accel.o 00:04:31.073 CXX test/cpp_headers/accel_module.o 00:04:31.073 CXX test/cpp_headers/assert.o 00:04:31.073 CXX test/cpp_headers/barrier.o 00:04:31.073 CXX test/cpp_headers/base64.o 00:04:31.073 CXX test/cpp_headers/bdev.o 00:04:31.073 CXX test/cpp_headers/bdev_module.o 00:04:31.073 CXX test/cpp_headers/bdev_zone.o 00:04:31.073 CXX test/cpp_headers/bit_array.o 00:04:31.073 CXX test/cpp_headers/bit_pool.o 00:04:31.073 CXX test/cpp_headers/blob_bdev.o 00:04:31.073 CXX test/cpp_headers/blobfs_bdev.o 00:04:31.073 CC app/spdk_dd/spdk_dd.o 00:04:31.073 CXX test/cpp_headers/blobfs.o 00:04:31.073 CXX test/cpp_headers/blob.o 00:04:31.073 CXX test/cpp_headers/conf.o 00:04:31.073 CXX test/cpp_headers/config.o 00:04:31.073 CXX test/cpp_headers/cpuset.o 00:04:31.073 CC app/iscsi_tgt/iscsi_tgt.o 00:04:31.073 CXX test/cpp_headers/crc16.o 00:04:31.073 CC app/nvmf_tgt/nvmf_main.o 00:04:31.073 CC app/spdk_tgt/spdk_tgt.o 00:04:31.073 CXX test/cpp_headers/crc32.o 00:04:31.073 CC examples/util/zipf/zipf.o 00:04:31.073 CC examples/ioat/perf/perf.o 00:04:31.073 CC test/app/jsoncat/jsoncat.o 00:04:31.073 CC test/app/histogram_perf/histogram_perf.o 00:04:31.073 CC examples/ioat/verify/verify.o 00:04:31.073 CC app/fio/nvme/fio_plugin.o 00:04:31.073 CC test/app/stub/stub.o 00:04:31.073 CC test/thread/poller_perf/poller_perf.o 00:04:31.073 CC test/env/vtophys/vtophys.o 00:04:31.073 CC test/env/pci/pci_ut.o 00:04:31.073 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:31.073 CC test/env/memory/memory_ut.o 00:04:31.073 CC app/fio/bdev/fio_plugin.o 00:04:31.073 CC test/dma/test_dma/test_dma.o 00:04:31.073 CC test/app/bdev_svc/bdev_svc.o 00:04:31.338 LINK spdk_lspci 00:04:31.338 CC test/env/mem_callbacks/mem_callbacks.o 00:04:31.338 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:31.338 LINK rpc_client_test 00:04:31.338 LINK spdk_nvme_discover 00:04:31.339 LINK interrupt_tgt 00:04:31.339 LINK jsoncat 00:04:31.339 LINK histogram_perf 00:04:31.339 LINK zipf 00:04:31.339 LINK poller_perf 00:04:31.600 LINK vtophys 00:04:31.600 CXX test/cpp_headers/crc64.o 00:04:31.600 CXX test/cpp_headers/dif.o 00:04:31.600 CXX test/cpp_headers/dma.o 00:04:31.600 LINK nvmf_tgt 00:04:31.600 CXX test/cpp_headers/endian.o 00:04:31.600 CXX test/cpp_headers/env_dpdk.o 00:04:31.600 CXX test/cpp_headers/env.o 00:04:31.600 LINK env_dpdk_post_init 00:04:31.600 CXX test/cpp_headers/event.o 00:04:31.600 CXX test/cpp_headers/fd_group.o 00:04:31.600 LINK spdk_trace_record 00:04:31.600 CXX test/cpp_headers/fd.o 00:04:31.600 CXX test/cpp_headers/file.o 00:04:31.600 LINK iscsi_tgt 00:04:31.600 LINK stub 00:04:31.600 CXX test/cpp_headers/fsdev.o 00:04:31.600 CXX test/cpp_headers/fsdev_module.o 00:04:31.600 CXX test/cpp_headers/ftl.o 00:04:31.600 CXX test/cpp_headers/fuse_dispatcher.o 00:04:31.601 CXX test/cpp_headers/gpt_spec.o 00:04:31.601 LINK spdk_tgt 00:04:31.601 CXX test/cpp_headers/hexlify.o 00:04:31.601 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:31.601 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:31.601 LINK ioat_perf 00:04:31.601 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:31.601 LINK bdev_svc 00:04:31.601 LINK verify 00:04:31.601 CXX test/cpp_headers/histogram_data.o 00:04:31.601 CXX test/cpp_headers/idxd.o 00:04:31.864 CXX test/cpp_headers/idxd_spec.o 00:04:31.864 CXX 
test/cpp_headers/init.o 00:04:31.864 LINK mem_callbacks 00:04:31.864 CXX test/cpp_headers/ioat.o 00:04:31.864 CXX test/cpp_headers/ioat_spec.o 00:04:31.864 LINK spdk_dd 00:04:31.864 CXX test/cpp_headers/iscsi_spec.o 00:04:31.864 LINK spdk_trace 00:04:31.864 CXX test/cpp_headers/json.o 00:04:31.864 CXX test/cpp_headers/jsonrpc.o 00:04:31.864 CXX test/cpp_headers/keyring.o 00:04:31.864 CXX test/cpp_headers/keyring_module.o 00:04:31.864 CXX test/cpp_headers/likely.o 00:04:31.864 CXX test/cpp_headers/log.o 00:04:31.864 CXX test/cpp_headers/lvol.o 00:04:31.864 LINK pci_ut 00:04:31.864 CXX test/cpp_headers/md5.o 00:04:31.864 CXX test/cpp_headers/memory.o 00:04:31.864 CXX test/cpp_headers/mmio.o 00:04:31.864 CXX test/cpp_headers/nbd.o 00:04:31.864 CXX test/cpp_headers/net.o 00:04:31.864 CXX test/cpp_headers/notify.o 00:04:31.864 CXX test/cpp_headers/nvme.o 00:04:31.865 CXX test/cpp_headers/nvme_intel.o 00:04:31.865 CXX test/cpp_headers/nvme_ocssd.o 00:04:31.865 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:32.128 CXX test/cpp_headers/nvme_spec.o 00:04:32.128 CXX test/cpp_headers/nvme_zns.o 00:04:32.128 CXX test/cpp_headers/nvmf_cmd.o 00:04:32.128 CC examples/sock/hello_world/hello_sock.o 00:04:32.128 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:32.128 CXX test/cpp_headers/nvmf.o 00:04:32.128 CXX test/cpp_headers/nvmf_spec.o 00:04:32.128 CXX test/cpp_headers/nvmf_transport.o 00:04:32.128 CC examples/vmd/lsvmd/lsvmd.o 00:04:32.128 CXX test/cpp_headers/opal.o 00:04:32.128 CC examples/vmd/led/led.o 00:04:32.128 CC examples/thread/thread/thread_ex.o 00:04:32.128 CXX test/cpp_headers/opal_spec.o 00:04:32.128 CC test/event/reactor/reactor.o 00:04:32.128 CC test/event/reactor_perf/reactor_perf.o 00:04:32.128 LINK nvme_fuzz 00:04:32.128 CC test/event/event_perf/event_perf.o 00:04:32.128 CC examples/idxd/perf/perf.o 00:04:32.128 CXX test/cpp_headers/pci_ids.o 00:04:32.128 LINK spdk_bdev 00:04:32.128 CC test/event/app_repeat/app_repeat.o 00:04:32.128 LINK test_dma 00:04:32.392 LINK spdk_nvme 00:04:32.392 CXX test/cpp_headers/pipe.o 00:04:32.392 CC test/event/scheduler/scheduler.o 00:04:32.392 CXX test/cpp_headers/queue.o 00:04:32.392 CXX test/cpp_headers/reduce.o 00:04:32.392 CXX test/cpp_headers/rpc.o 00:04:32.392 CXX test/cpp_headers/scheduler.o 00:04:32.392 CXX test/cpp_headers/scsi.o 00:04:32.392 CXX test/cpp_headers/scsi_spec.o 00:04:32.392 CXX test/cpp_headers/sock.o 00:04:32.392 CXX test/cpp_headers/stdinc.o 00:04:32.392 CXX test/cpp_headers/string.o 00:04:32.392 CXX test/cpp_headers/thread.o 00:04:32.392 CXX test/cpp_headers/trace.o 00:04:32.392 CXX test/cpp_headers/trace_parser.o 00:04:32.392 LINK vhost_fuzz 00:04:32.392 CXX test/cpp_headers/tree.o 00:04:32.392 CXX test/cpp_headers/ublk.o 00:04:32.392 CXX test/cpp_headers/util.o 00:04:32.392 CC app/vhost/vhost.o 00:04:32.392 CXX test/cpp_headers/uuid.o 00:04:32.392 CXX test/cpp_headers/version.o 00:04:32.392 CXX test/cpp_headers/vfio_user_pci.o 00:04:32.392 CXX test/cpp_headers/vfio_user_spec.o 00:04:32.393 CXX test/cpp_headers/vhost.o 00:04:32.393 LINK lsvmd 00:04:32.393 CXX test/cpp_headers/vmd.o 00:04:32.393 CXX test/cpp_headers/xor.o 00:04:32.393 CXX test/cpp_headers/zipf.o 00:04:32.393 LINK led 00:04:32.652 LINK reactor_perf 00:04:32.652 LINK spdk_nvme_perf 00:04:32.652 LINK reactor 00:04:32.652 LINK event_perf 00:04:32.652 LINK app_repeat 00:04:32.652 LINK hello_sock 00:04:32.652 LINK spdk_nvme_identify 00:04:32.652 LINK spdk_top 00:04:32.652 LINK thread 00:04:32.652 LINK memory_ut 00:04:32.912 LINK scheduler 00:04:32.912 LINK vhost 00:04:32.912 
LINK idxd_perf 00:04:32.912 CC test/nvme/reset/reset.o 00:04:32.912 CC test/nvme/compliance/nvme_compliance.o 00:04:32.912 CC test/nvme/aer/aer.o 00:04:32.912 CC test/nvme/boot_partition/boot_partition.o 00:04:32.912 CC test/nvme/connect_stress/connect_stress.o 00:04:32.912 CC test/nvme/sgl/sgl.o 00:04:32.912 CC test/nvme/reserve/reserve.o 00:04:32.912 CC test/nvme/overhead/overhead.o 00:04:32.912 CC test/nvme/fused_ordering/fused_ordering.o 00:04:32.912 CC test/nvme/e2edp/nvme_dp.o 00:04:32.912 CC test/nvme/simple_copy/simple_copy.o 00:04:32.912 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:32.912 CC test/nvme/fdp/fdp.o 00:04:32.912 CC test/nvme/err_injection/err_injection.o 00:04:32.912 CC test/nvme/startup/startup.o 00:04:32.912 CC test/nvme/cuse/cuse.o 00:04:32.912 CC test/accel/dif/dif.o 00:04:32.912 CC test/blobfs/mkfs/mkfs.o 00:04:32.912 CC test/lvol/esnap/esnap.o 00:04:33.172 CC examples/nvme/arbitration/arbitration.o 00:04:33.172 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:33.172 CC examples/nvme/abort/abort.o 00:04:33.172 CC examples/nvme/hello_world/hello_world.o 00:04:33.172 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:33.172 CC examples/nvme/reconnect/reconnect.o 00:04:33.172 CC examples/nvme/hotplug/hotplug.o 00:04:33.172 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:33.172 LINK boot_partition 00:04:33.172 LINK startup 00:04:33.172 LINK connect_stress 00:04:33.172 LINK err_injection 00:04:33.172 CC examples/accel/perf/accel_perf.o 00:04:33.172 LINK doorbell_aers 00:04:33.172 LINK simple_copy 00:04:33.172 LINK fused_ordering 00:04:33.172 CC examples/blob/cli/blobcli.o 00:04:33.172 LINK reserve 00:04:33.172 LINK sgl 00:04:33.172 LINK overhead 00:04:33.172 LINK nvme_dp 00:04:33.172 LINK reset 00:04:33.431 CC examples/blob/hello_world/hello_blob.o 00:04:33.431 LINK aer 00:04:33.431 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:33.431 LINK cmb_copy 00:04:33.431 LINK nvme_compliance 00:04:33.431 LINK pmr_persistence 00:04:33.431 LINK mkfs 00:04:33.431 LINK hotplug 00:04:33.431 LINK hello_world 00:04:33.431 LINK fdp 00:04:33.689 LINK abort 00:04:33.689 LINK arbitration 00:04:33.689 LINK reconnect 00:04:33.689 LINK hello_blob 00:04:33.689 LINK hello_fsdev 00:04:33.689 LINK dif 00:04:33.689 LINK accel_perf 00:04:33.947 LINK nvme_manage 00:04:33.947 LINK blobcli 00:04:33.947 LINK iscsi_fuzz 00:04:34.206 CC test/bdev/bdevio/bdevio.o 00:04:34.206 CC examples/bdev/hello_world/hello_bdev.o 00:04:34.206 CC examples/bdev/bdevperf/bdevperf.o 00:04:34.464 LINK hello_bdev 00:04:34.464 LINK bdevio 00:04:34.724 LINK cuse 00:04:34.983 LINK bdevperf 00:04:35.241 CC examples/nvmf/nvmf/nvmf.o 00:04:35.808 LINK nvmf 00:04:38.338 LINK esnap 00:04:38.338 00:04:38.338 real 1m6.513s 00:04:38.338 user 9m1.844s 00:04:38.338 sys 1m58.544s 00:04:38.338 23:29:12 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:38.338 23:29:12 make -- common/autotest_common.sh@10 -- $ set +x 00:04:38.338 ************************************ 00:04:38.338 END TEST make 00:04:38.338 ************************************ 00:04:38.338 23:29:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:38.338 23:29:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:38.338 23:29:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:38.338 23:29:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.338 23:29:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:38.338 23:29:12 -- pm/common@44 
-- $ pid=4138759 00:04:38.338 23:29:12 -- pm/common@50 -- $ kill -TERM 4138759 00:04:38.338 23:29:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.338 23:29:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:38.338 23:29:12 -- pm/common@44 -- $ pid=4138761 00:04:38.338 23:29:12 -- pm/common@50 -- $ kill -TERM 4138761 00:04:38.338 23:29:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.338 23:29:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:38.338 23:29:12 -- pm/common@44 -- $ pid=4138763 00:04:38.338 23:29:12 -- pm/common@50 -- $ kill -TERM 4138763 00:04:38.338 23:29:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.338 23:29:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:38.338 23:29:12 -- pm/common@44 -- $ pid=4138794 00:04:38.338 23:29:12 -- pm/common@50 -- $ sudo -E kill -TERM 4138794 00:04:38.597 23:29:12 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:38.597 23:29:12 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:38.597 23:29:12 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:38.597 23:29:12 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:38.597 23:29:12 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:38.597 23:29:12 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:38.597 23:29:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.597 23:29:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.597 23:29:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.597 23:29:12 -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.597 23:29:12 -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.597 23:29:12 -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.597 23:29:12 -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.597 23:29:12 -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.597 23:29:12 -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.597 23:29:12 -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.597 23:29:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.597 23:29:12 -- scripts/common.sh@344 -- # case "$op" in 00:04:38.597 23:29:12 -- scripts/common.sh@345 -- # : 1 00:04:38.597 23:29:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.597 23:29:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.597 23:29:12 -- scripts/common.sh@365 -- # decimal 1 00:04:38.597 23:29:12 -- scripts/common.sh@353 -- # local d=1 00:04:38.597 23:29:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.597 23:29:12 -- scripts/common.sh@355 -- # echo 1 00:04:38.597 23:29:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.597 23:29:12 -- scripts/common.sh@366 -- # decimal 2 00:04:38.597 23:29:12 -- scripts/common.sh@353 -- # local d=2 00:04:38.597 23:29:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.597 23:29:12 -- scripts/common.sh@355 -- # echo 2 00:04:38.597 23:29:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.597 23:29:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.597 23:29:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.597 23:29:12 -- scripts/common.sh@368 -- # return 0 00:04:38.597 23:29:12 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.597 23:29:12 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:38.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.597 --rc genhtml_branch_coverage=1 00:04:38.597 --rc genhtml_function_coverage=1 00:04:38.597 --rc genhtml_legend=1 00:04:38.597 --rc geninfo_all_blocks=1 00:04:38.597 --rc geninfo_unexecuted_blocks=1 00:04:38.597 00:04:38.597 ' 00:04:38.597 23:29:12 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:38.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.597 --rc genhtml_branch_coverage=1 00:04:38.597 --rc genhtml_function_coverage=1 00:04:38.597 --rc genhtml_legend=1 00:04:38.597 --rc geninfo_all_blocks=1 00:04:38.597 --rc geninfo_unexecuted_blocks=1 00:04:38.597 00:04:38.597 ' 00:04:38.597 23:29:12 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:38.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.597 --rc genhtml_branch_coverage=1 00:04:38.597 --rc genhtml_function_coverage=1 00:04:38.597 --rc genhtml_legend=1 00:04:38.597 --rc geninfo_all_blocks=1 00:04:38.597 --rc geninfo_unexecuted_blocks=1 00:04:38.597 00:04:38.597 ' 00:04:38.598 23:29:12 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:38.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.598 --rc genhtml_branch_coverage=1 00:04:38.598 --rc genhtml_function_coverage=1 00:04:38.598 --rc genhtml_legend=1 00:04:38.598 --rc geninfo_all_blocks=1 00:04:38.598 --rc geninfo_unexecuted_blocks=1 00:04:38.598 00:04:38.598 ' 00:04:38.598 23:29:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:38.598 23:29:12 -- nvmf/common.sh@7 -- # uname -s 00:04:38.598 23:29:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.598 23:29:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.598 23:29:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.598 23:29:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.598 23:29:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.598 23:29:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.598 23:29:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.598 23:29:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.598 23:29:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.598 23:29:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.598 23:29:12 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:38.598 23:29:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:38.598 23:29:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.598 23:29:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.598 23:29:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:38.598 23:29:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.598 23:29:12 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:38.598 23:29:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:38.598 23:29:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.598 23:29:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.598 23:29:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.598 23:29:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.598 23:29:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.598 23:29:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.598 23:29:12 -- paths/export.sh@5 -- # export PATH 00:04:38.598 23:29:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.598 23:29:12 -- nvmf/common.sh@51 -- # : 0 00:04:38.598 23:29:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:38.598 23:29:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:38.598 23:29:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.598 23:29:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.598 23:29:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.598 23:29:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:38.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:38.598 23:29:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:38.598 23:29:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:38.598 23:29:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:38.598 23:29:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:38.598 23:29:12 -- spdk/autotest.sh@32 -- # uname -s 00:04:38.598 23:29:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:38.598 23:29:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:38.598 23:29:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
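The common.sh trace just above implements a field-by-field version comparison to decide whether the installed lcov is older than 2.x. The function below is a simplified standalone rendering of that logic, not the upstream implementation verbatim.

lt() {  # exit 0 when $1 is an older version than $2, comparing dot-separated numeric fields
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
# same version extraction as the trace: last field of `lcov --version`
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x detected, branch/function coverage flags needed"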
00:04:38.598 23:29:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:38.598 23:29:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:38.598 23:29:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:38.598 23:29:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:38.598 23:29:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:38.598 23:29:12 -- spdk/autotest.sh@48 -- # udevadm_pid=25166 00:04:38.598 23:29:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:38.598 23:29:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:38.598 23:29:12 -- pm/common@17 -- # local monitor 00:04:38.598 23:29:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.598 23:29:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.598 23:29:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.598 23:29:12 -- pm/common@21 -- # date +%s 00:04:38.598 23:29:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.598 23:29:12 -- pm/common@21 -- # date +%s 00:04:38.598 23:29:12 -- pm/common@25 -- # sleep 1 00:04:38.598 23:29:12 -- pm/common@21 -- # date +%s 00:04:38.598 23:29:12 -- pm/common@21 -- # date +%s 00:04:38.598 23:29:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732055352 00:04:38.598 23:29:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732055352 00:04:38.598 23:29:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732055352 00:04:38.598 23:29:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732055352 00:04:38.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732055352_collect-vmstat.pm.log 00:04:38.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732055352_collect-cpu-load.pm.log 00:04:38.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732055352_collect-cpu-temp.pm.log 00:04:38.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732055352_collect-bmc-pm.bmc.pm.log 00:04:39.969 23:29:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:39.969 23:29:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:39.969 23:29:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.969 23:29:13 -- common/autotest_common.sh@10 -- # set +x 00:04:39.969 23:29:13 -- spdk/autotest.sh@59 -- # create_test_list 00:04:39.969 23:29:13 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:39.969 23:29:13 -- common/autotest_common.sh@10 -- # set +x 00:04:39.969 23:29:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:39.969 23:29:13 -- 
spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.969 23:29:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.969 23:29:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:39.969 23:29:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.969 23:29:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:39.969 23:29:13 -- common/autotest_common.sh@1457 -- # uname 00:04:39.969 23:29:13 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:39.969 23:29:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:39.969 23:29:13 -- common/autotest_common.sh@1477 -- # uname 00:04:39.969 23:29:13 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:39.969 23:29:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:39.969 23:29:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:39.969 lcov: LCOV version 1.15 00:04:39.969 23:29:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:12.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:12.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:17.373 23:29:51 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:17.373 23:29:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.373 23:29:51 -- common/autotest_common.sh@10 -- # set +x 00:05:17.373 23:29:51 -- spdk/autotest.sh@78 -- # rm -f 00:05:17.373 23:29:51 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:18.797 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:18.797 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:18.797 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:18.797 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:18.797 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:18.797 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:18.797 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:18.797 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:18.797 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:18.797 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:18.797 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:18.797 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:18.797 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:18.797 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:18.797 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:18.797 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:18.797 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:18.797 23:29:53 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:18.797 23:29:53 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:18.797 23:29:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:18.797 23:29:53 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:18.797 23:29:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:18.797 23:29:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:18.797 23:29:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:18.797 23:29:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:18.797 23:29:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:18.797 23:29:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:18.797 23:29:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:18.797 23:29:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:18.797 23:29:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:18.797 23:29:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:18.797 23:29:53 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:19.056 No valid GPT data, bailing 00:05:19.056 23:29:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:19.056 23:29:53 -- scripts/common.sh@394 -- # pt= 00:05:19.056 23:29:53 -- scripts/common.sh@395 -- # return 1 00:05:19.056 23:29:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:19.056 1+0 records in 00:05:19.056 1+0 records out 00:05:19.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00206586 s, 508 MB/s 00:05:19.056 23:29:53 -- spdk/autotest.sh@105 -- # sync 00:05:19.056 23:29:53 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:19.056 23:29:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:19.056 23:29:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:20.959 23:29:55 -- spdk/autotest.sh@111 -- # uname -s 00:05:20.959 23:29:55 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:20.959 23:29:55 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:20.959 23:29:55 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:22.333 Hugepages 00:05:22.333 node hugesize free / total 00:05:22.333 node0 1048576kB 0 / 0 00:05:22.333 node0 2048kB 0 / 0 00:05:22.333 node1 1048576kB 0 / 0 00:05:22.333 node1 2048kB 0 / 0 00:05:22.333 00:05:22.333 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:22.333 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:22.333 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:22.333 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:22.333 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:22.333 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:22.333 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:22.333 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:22.333 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:22.333 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:22.333 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:22.333 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:22.333 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:22.333 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:22.333 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:22.333 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:22.333 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:22.333 NVMe 0000:88:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:05:22.333 23:29:56 -- spdk/autotest.sh@117 -- # uname -s 00:05:22.333 23:29:56 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:22.333 23:29:56 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:22.333 23:29:56 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:23.269 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:23.269 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:23.269 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:23.527 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:23.527 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:23.527 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:23.527 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:23.527 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:23.527 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:23.527 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:23.527 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:23.527 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:23.527 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:23.527 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:23.527 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:23.527 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:24.462 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:24.462 23:29:58 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:25.837 23:29:59 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:25.837 23:29:59 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:25.837 23:29:59 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.837 23:29:59 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:25.837 23:29:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:25.837 23:29:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:25.837 23:29:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.837 23:29:59 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:25.837 23:29:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:25.837 23:29:59 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:25.837 23:29:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:25.837 23:29:59 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:26.772 Waiting for block devices as requested 00:05:26.772 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:27.030 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:27.030 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:27.030 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:27.288 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:27.288 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:27.288 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:27.288 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:27.546 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:27.546 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:27.546 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:27.546 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:27.805 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:27.805 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:27.805 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:27.805 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:28.064 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:28.064 23:30:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:28.064 23:30:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:28.064 23:30:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:28.064 23:30:02 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:28.064 23:30:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:28.064 23:30:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:28.064 23:30:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:28.064 23:30:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:28.064 23:30:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:28.064 23:30:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:28.064 23:30:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:28.064 23:30:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:28.064 23:30:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:28.064 23:30:02 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:28.064 23:30:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:28.064 23:30:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:28.064 23:30:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:28.064 23:30:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:28.064 23:30:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:28.064 23:30:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:28.064 23:30:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:28.064 23:30:02 -- common/autotest_common.sh@1543 -- # continue 00:05:28.064 23:30:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:28.064 23:30:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:28.064 23:30:02 -- common/autotest_common.sh@10 -- # set +x 00:05:28.064 23:30:02 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:28.064 23:30:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.064 23:30:02 -- common/autotest_common.sh@10 -- # set +x 00:05:28.064 23:30:02 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:29.444 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:29.444 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:29.444 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:29.444 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:29.444 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:29.444 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:29.444 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:29.444 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:29.444 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:29.444 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:29.444 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:29.444 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:29.444 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:29.444 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:29.444 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:29.444 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:30.380 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:30.380 23:30:04 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:30.380 23:30:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.380 23:30:04 -- common/autotest_common.sh@10 -- # set +x 00:05:30.380 23:30:04 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:30.380 23:30:04 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:30.380 23:30:04 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:30.638 23:30:04 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:30.638 23:30:04 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:30.638 23:30:04 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:30.638 23:30:04 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:30.638 23:30:04 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:30.638 23:30:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:30.638 23:30:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:30.638 23:30:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:30.638 23:30:04 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:30.638 23:30:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:30.638 23:30:04 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:30.638 23:30:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:30.638 23:30:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:30.638 23:30:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:30.638 23:30:04 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:30.638 23:30:04 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:30.638 23:30:04 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:30.638 23:30:04 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:30.638 23:30:04 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:05:30.638 23:30:04 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:05:30.638 23:30:04 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=36054 00:05:30.638 23:30:04 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.638 23:30:04 -- common/autotest_common.sh@1585 -- # waitforlisten 36054 00:05:30.638 23:30:04 -- common/autotest_common.sh@835 -- # '[' -z 36054 ']' 00:05:30.638 23:30:04 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.638 23:30:04 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.638 23:30:04 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.638 23:30:04 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.638 23:30:04 -- common/autotest_common.sh@10 -- # set +x 00:05:30.638 [2024-11-19 23:30:04.807738] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
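For reference, the bdf discovery traced above boils down to two steps: get_nvme_bdfs asks scripts/gen_nvme.sh for a bdev config and extracts each controller's PCI address with jq, and get_nvme_bdfs_by_id then keeps only the controllers whose PCI device ID matches (0x0a54 here) by reading sysfs. A stand-alone sketch of that pattern, assuming the same workspace layout as this job and root privileges:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: layout from this log

    # Every NVMe controller SPDK can see, as a PCI address (traddr).
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

    # Keep only controllers with the PCI device ID the cleanup targets.
    want=0x0a54
    for bdf in "${bdfs[@]}"; do
        [[ "$(cat "/sys/bus/pci/devices/$bdf/device")" == "$want" ]] && echo "$bdf"
    done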
00:05:30.638 [2024-11-19 23:30:04.807841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid36054 ] 00:05:30.638 [2024-11-19 23:30:04.880326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.638 [2024-11-19 23:30:04.929561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.895 23:30:05 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.895 23:30:05 -- common/autotest_common.sh@868 -- # return 0 00:05:30.896 23:30:05 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:30.896 23:30:05 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:30.896 23:30:05 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:34.178 nvme0n1 00:05:34.178 23:30:08 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:34.438 [2024-11-19 23:30:08.548867] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:34.438 [2024-11-19 23:30:08.548920] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:34.438 request: 00:05:34.438 { 00:05:34.438 "nvme_ctrlr_name": "nvme0", 00:05:34.438 "password": "test", 00:05:34.438 "method": "bdev_nvme_opal_revert", 00:05:34.438 "req_id": 1 00:05:34.438 } 00:05:34.438 Got JSON-RPC error response 00:05:34.438 response: 00:05:34.438 { 00:05:34.438 "code": -32603, 00:05:34.438 "message": "Internal error" 00:05:34.438 } 00:05:34.438 23:30:08 -- common/autotest_common.sh@1591 -- # true 00:05:34.438 23:30:08 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:34.438 23:30:08 -- common/autotest_common.sh@1595 -- # killprocess 36054 00:05:34.438 23:30:08 -- common/autotest_common.sh@954 -- # '[' -z 36054 ']' 00:05:34.438 23:30:08 -- common/autotest_common.sh@958 -- # kill -0 36054 00:05:34.438 23:30:08 -- common/autotest_common.sh@959 -- # uname 00:05:34.438 23:30:08 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.438 23:30:08 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 36054 00:05:34.438 23:30:08 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.438 23:30:08 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.438 23:30:08 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 36054' 00:05:34.438 killing process with pid 36054 00:05:34.438 23:30:08 -- common/autotest_common.sh@973 -- # kill 36054 00:05:34.438 23:30:08 -- common/autotest_common.sh@978 -- # wait 36054 00:05:34.438 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.438 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.438 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.438 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.438 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.438 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.438 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.438 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.438 EAL: Unexpected size 0 of DMA 
remapping cleared instead of 2097152
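The repeated EAL lines here are printed while spdk_tgt tears down its mappings after the failed OPAL revert and is killed by the harness. Stripped of the harness, the revert itself is just two rpc.py calls against the running target (controller name, transport address and password taken from the trace above; spdk_tgt must already be listening on /var/tmp/spdk.sock):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Attach the PCIe controller as "nvme0", exactly as the harness does.
    "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0

    # Revert the TPer with the test password. On this drive it fails with the
    # JSON-RPC "Internal error" shown above (admin SP session error 18).
    "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test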
00:05:34.439 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.439 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.439 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.439 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.439 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.439 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.439 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.439 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.439 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:34.439 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:36.341 23:30:10 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:36.341 23:30:10 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:36.341 23:30:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:36.341 23:30:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:36.341 23:30:10 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:36.341 23:30:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.341 23:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:36.341 23:30:10 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:36.341 23:30:10 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:36.341 23:30:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.341 23:30:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.341 23:30:10 -- common/autotest_common.sh@10 -- # set +x 00:05:36.341 ************************************ 00:05:36.341 START TEST env 00:05:36.341 ************************************ 00:05:36.341 23:30:10 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:36.341 * Looking for test storage... 00:05:36.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:36.341 23:30:10 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:36.341 23:30:10 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:36.341 23:30:10 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:36.341 23:30:10 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:36.341 23:30:10 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.341 23:30:10 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.341 23:30:10 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.341 23:30:10 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.341 23:30:10 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.341 23:30:10 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.341 23:30:10 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.341 23:30:10 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.341 23:30:10 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.341 23:30:10 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.341 23:30:10 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.341 23:30:10 env -- scripts/common.sh@344 -- # case "$op" in 00:05:36.341 23:30:10 env -- scripts/common.sh@345 -- # : 1 00:05:36.341 23:30:10 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.341 23:30:10 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.341 23:30:10 env -- scripts/common.sh@365 -- # decimal 1 00:05:36.341 23:30:10 env -- scripts/common.sh@353 -- # local d=1 00:05:36.341 23:30:10 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.341 23:30:10 env -- scripts/common.sh@355 -- # echo 1 00:05:36.341 23:30:10 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.341 23:30:10 env -- scripts/common.sh@366 -- # decimal 2 00:05:36.341 23:30:10 env -- scripts/common.sh@353 -- # local d=2 00:05:36.341 23:30:10 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.341 23:30:10 env -- scripts/common.sh@355 -- # echo 2 00:05:36.341 23:30:10 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.341 23:30:10 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.341 23:30:10 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.341 23:30:10 env -- scripts/common.sh@368 -- # return 0 00:05:36.341 23:30:10 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.341 23:30:10 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.341 --rc genhtml_branch_coverage=1 00:05:36.341 --rc genhtml_function_coverage=1 00:05:36.341 --rc genhtml_legend=1 00:05:36.341 --rc geninfo_all_blocks=1 00:05:36.341 --rc geninfo_unexecuted_blocks=1 00:05:36.341 00:05:36.341 ' 00:05:36.341 23:30:10 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.341 --rc genhtml_branch_coverage=1 00:05:36.341 --rc genhtml_function_coverage=1 00:05:36.341 --rc genhtml_legend=1 00:05:36.341 --rc geninfo_all_blocks=1 00:05:36.341 --rc geninfo_unexecuted_blocks=1 00:05:36.341 00:05:36.341 ' 00:05:36.341 23:30:10 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:36.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.341 --rc genhtml_branch_coverage=1 00:05:36.341 --rc genhtml_function_coverage=1 00:05:36.341 --rc genhtml_legend=1 00:05:36.341 --rc geninfo_all_blocks=1 00:05:36.341 --rc geninfo_unexecuted_blocks=1 00:05:36.341 00:05:36.341 ' 00:05:36.341 23:30:10 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.341 --rc genhtml_branch_coverage=1 00:05:36.341 --rc genhtml_function_coverage=1 00:05:36.341 --rc genhtml_legend=1 00:05:36.341 --rc geninfo_all_blocks=1 00:05:36.341 --rc geninfo_unexecuted_blocks=1 00:05:36.341 00:05:36.341 ' 00:05:36.341 23:30:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:36.341 23:30:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.341 23:30:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.341 23:30:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.341 ************************************ 00:05:36.341 START TEST env_memory 00:05:36.341 ************************************ 00:05:36.341 23:30:10 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:36.341 00:05:36.341 00:05:36.341 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.341 http://cunit.sourceforge.net/ 00:05:36.341 00:05:36.341 00:05:36.341 Suite: memory 00:05:36.341 Test: alloc and free memory map ...[2024-11-19 23:30:10.538651] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:36.341 passed 00:05:36.341 Test: mem map translation ...[2024-11-19 23:30:10.559340] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:36.341 [2024-11-19 23:30:10.559364] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:36.341 [2024-11-19 23:30:10.559421] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:36.341 [2024-11-19 23:30:10.559433] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:36.341 passed 00:05:36.341 Test: mem map registration ...[2024-11-19 23:30:10.601494] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:36.341 [2024-11-19 23:30:10.601513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:36.341 passed 00:05:36.601 Test: mem map adjacent registrations ...passed 00:05:36.601 00:05:36.601 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.601 suites 1 1 n/a 0 0 00:05:36.601 tests 4 4 4 0 0 00:05:36.601 asserts 152 152 152 0 n/a 00:05:36.601 00:05:36.601 Elapsed time = 0.142 seconds 00:05:36.601 00:05:36.601 real 0m0.149s 00:05:36.601 user 0m0.142s 00:05:36.601 sys 0m0.007s 00:05:36.601 23:30:10 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.601 23:30:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:36.601 ************************************ 00:05:36.601 END TEST env_memory 00:05:36.601 ************************************ 00:05:36.601 23:30:10 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:36.601 23:30:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.601 23:30:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.601 23:30:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.601 ************************************ 00:05:36.601 START TEST env_vtophys 00:05:36.601 ************************************ 00:05:36.601 23:30:10 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:36.601 EAL: lib.eal log level changed from notice to debug 00:05:36.601 EAL: Detected lcore 0 as core 0 on socket 0 00:05:36.601 EAL: Detected lcore 1 as core 1 on socket 0 00:05:36.601 EAL: Detected lcore 2 as core 2 on socket 0 00:05:36.601 EAL: Detected lcore 3 as core 3 on socket 0 00:05:36.601 EAL: Detected lcore 4 as core 4 on socket 0 00:05:36.601 EAL: Detected lcore 5 as core 5 on socket 0 00:05:36.601 EAL: Detected lcore 6 as core 8 on socket 0 00:05:36.601 EAL: Detected lcore 7 as core 9 on socket 0 00:05:36.601 EAL: Detected lcore 8 as core 10 on socket 0 00:05:36.601 EAL: Detected lcore 9 as core 11 on socket 0 00:05:36.601 EAL: Detected lcore 10 
as core 12 on socket 0 00:05:36.601 EAL: Detected lcore 11 as core 13 on socket 0 00:05:36.601 EAL: Detected lcore 12 as core 0 on socket 1 00:05:36.601 EAL: Detected lcore 13 as core 1 on socket 1 00:05:36.601 EAL: Detected lcore 14 as core 2 on socket 1 00:05:36.601 EAL: Detected lcore 15 as core 3 on socket 1 00:05:36.601 EAL: Detected lcore 16 as core 4 on socket 1 00:05:36.601 EAL: Detected lcore 17 as core 5 on socket 1 00:05:36.601 EAL: Detected lcore 18 as core 8 on socket 1 00:05:36.601 EAL: Detected lcore 19 as core 9 on socket 1 00:05:36.601 EAL: Detected lcore 20 as core 10 on socket 1 00:05:36.601 EAL: Detected lcore 21 as core 11 on socket 1 00:05:36.601 EAL: Detected lcore 22 as core 12 on socket 1 00:05:36.601 EAL: Detected lcore 23 as core 13 on socket 1 00:05:36.601 EAL: Detected lcore 24 as core 0 on socket 0 00:05:36.601 EAL: Detected lcore 25 as core 1 on socket 0 00:05:36.601 EAL: Detected lcore 26 as core 2 on socket 0 00:05:36.601 EAL: Detected lcore 27 as core 3 on socket 0 00:05:36.601 EAL: Detected lcore 28 as core 4 on socket 0 00:05:36.601 EAL: Detected lcore 29 as core 5 on socket 0 00:05:36.601 EAL: Detected lcore 30 as core 8 on socket 0 00:05:36.601 EAL: Detected lcore 31 as core 9 on socket 0 00:05:36.601 EAL: Detected lcore 32 as core 10 on socket 0 00:05:36.601 EAL: Detected lcore 33 as core 11 on socket 0 00:05:36.601 EAL: Detected lcore 34 as core 12 on socket 0 00:05:36.601 EAL: Detected lcore 35 as core 13 on socket 0 00:05:36.601 EAL: Detected lcore 36 as core 0 on socket 1 00:05:36.601 EAL: Detected lcore 37 as core 1 on socket 1 00:05:36.601 EAL: Detected lcore 38 as core 2 on socket 1 00:05:36.601 EAL: Detected lcore 39 as core 3 on socket 1 00:05:36.601 EAL: Detected lcore 40 as core 4 on socket 1 00:05:36.601 EAL: Detected lcore 41 as core 5 on socket 1 00:05:36.601 EAL: Detected lcore 42 as core 8 on socket 1 00:05:36.601 EAL: Detected lcore 43 as core 9 on socket 1 00:05:36.601 EAL: Detected lcore 44 as core 10 on socket 1 00:05:36.601 EAL: Detected lcore 45 as core 11 on socket 1 00:05:36.601 EAL: Detected lcore 46 as core 12 on socket 1 00:05:36.601 EAL: Detected lcore 47 as core 13 on socket 1 00:05:36.601 EAL: Maximum logical cores by configuration: 128 00:05:36.601 EAL: Detected CPU lcores: 48 00:05:36.601 EAL: Detected NUMA nodes: 2 00:05:36.601 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:36.601 EAL: Detected shared linkage of DPDK 00:05:36.601 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:36.601 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:36.601 EAL: Registered [vdev] bus. 
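The lcore/socket map EAL prints above is read from the standard sysfs CPU topology; if the 48-core / 2-socket layout ever looks wrong on a node, the same pairing can be checked without DPDK at all. A small illustrative loop, nothing SPDK-specific:

    # Print "cpu N: core C on socket S" for every CPU, matching the EAL lines above.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        n=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        sock=$(cat "$cpu/topology/physical_package_id")
        echo "cpu $n: core $core on socket $sock"
    done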
00:05:36.601 EAL: bus.vdev log level changed from disabled to notice 00:05:36.601 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:36.601 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:36.601 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:36.601 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:36.601 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:36.601 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:36.601 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:36.601 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:36.601 EAL: No shared files mode enabled, IPC will be disabled 00:05:36.601 EAL: No shared files mode enabled, IPC is disabled 00:05:36.601 EAL: Bus pci wants IOVA as 'DC' 00:05:36.601 EAL: Bus vdev wants IOVA as 'DC' 00:05:36.601 EAL: Buses did not request a specific IOVA mode. 00:05:36.601 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:36.601 EAL: Selected IOVA mode 'VA' 00:05:36.601 EAL: Probing VFIO support... 00:05:36.601 EAL: IOMMU type 1 (Type 1) is supported 00:05:36.601 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:36.601 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:36.601 EAL: VFIO support initialized 00:05:36.601 EAL: Ask a virtual area of 0x2e000 bytes 00:05:36.601 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:36.601 EAL: Setting up physically contiguous memory... 
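EAL selects IOVA-as-VA above because an IOMMU is exposed and VFIO type 1 initializes. A quick pre-flight check for those same conditions on a test node, before launching any SPDK app, could look like the snippet below (illustrative only; the harness relies on scripts/setup.sh for the actual driver binding):

    # An empty /sys/kernel/iommu_groups means no usable IOMMU for VFIO.
    echo "iommu groups: $(ls /sys/kernel/iommu_groups 2>/dev/null | wc -l)"

    # Is vfio-pci loaded, and how many PCI devices are currently bound to it?
    lsmod | grep -E '^vfio' || echo "vfio modules not loaded"
    echo "devices on vfio-pci: $(ls /sys/bus/pci/drivers/vfio-pci 2>/dev/null | grep -c '^0000:')"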
00:05:36.601 EAL: Setting maximum number of open files to 524288 00:05:36.601 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:36.601 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:36.601 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:36.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.601 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:36.601 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.601 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:36.601 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:36.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.601 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:36.601 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.601 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:36.601 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:36.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.601 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:36.601 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.601 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:36.601 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:36.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.601 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:36.601 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.601 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:36.601 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:36.601 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:36.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.601 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:36.601 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.601 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:36.601 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:36.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.601 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:36.601 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.601 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:36.601 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:36.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.601 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:36.601 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.601 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:36.601 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:36.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.601 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:36.601 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:36.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.601 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:36.601 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:36.601 EAL: Hugepages will be freed exactly as allocated. 00:05:36.601 EAL: No shared files mode enabled, IPC is disabled 00:05:36.601 EAL: No shared files mode enabled, IPC is disabled 00:05:36.601 EAL: TSC frequency is ~2700000 KHz 00:05:36.601 EAL: Main lcore 0 is ready (tid=7ff18de44a00;cpuset=[0]) 00:05:36.602 EAL: Trying to obtain current memory policy. 00:05:36.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.602 EAL: Restoring previous memory policy: 0 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was expanded by 2MB 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:36.602 EAL: Mem event callback 'spdk:(nil)' registered 00:05:36.602 00:05:36.602 00:05:36.602 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.602 http://cunit.sourceforge.net/ 00:05:36.602 00:05:36.602 00:05:36.602 Suite: components_suite 00:05:36.602 Test: vtophys_malloc_test ...passed 00:05:36.602 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:36.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.602 EAL: Restoring previous memory policy: 4 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was expanded by 4MB 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was shrunk by 4MB 00:05:36.602 EAL: Trying to obtain current memory policy. 00:05:36.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.602 EAL: Restoring previous memory policy: 4 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was expanded by 6MB 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was shrunk by 6MB 00:05:36.602 EAL: Trying to obtain current memory policy. 00:05:36.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.602 EAL: Restoring previous memory policy: 4 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was expanded by 10MB 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was shrunk by 10MB 00:05:36.602 EAL: Trying to obtain current memory policy. 
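Every "Heap on socket 0 was expanded/shrunk by N MB" line that follows is EAL growing or releasing the 2 MB hugepage-backed heap on demand; the pages themselves were reserved earlier by scripts/setup.sh. To watch the per-node pool while a test like this runs, something along these lines works on any Linux node:

    # Per-NUMA-node 2MB hugepage pool (the sizes EAL reports are multiples of 2097152).
    for node in /sys/devices/system/node/node*; do
        total=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
        free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
        echo "$(basename "$node"): $total total, $free free"
    done

    # System-wide summary.
    grep '^HugePages_' /proc/meminfo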
00:05:36.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.602 EAL: Restoring previous memory policy: 4 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was expanded by 18MB 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was shrunk by 18MB 00:05:36.602 EAL: Trying to obtain current memory policy. 00:05:36.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.602 EAL: Restoring previous memory policy: 4 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was expanded by 34MB 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was shrunk by 34MB 00:05:36.602 EAL: Trying to obtain current memory policy. 00:05:36.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.602 EAL: Restoring previous memory policy: 4 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was expanded by 66MB 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was shrunk by 66MB 00:05:36.602 EAL: Trying to obtain current memory policy. 00:05:36.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.602 EAL: Restoring previous memory policy: 4 00:05:36.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.602 EAL: request: mp_malloc_sync 00:05:36.602 EAL: No shared files mode enabled, IPC is disabled 00:05:36.602 EAL: Heap on socket 0 was expanded by 130MB 00:05:36.860 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.860 EAL: request: mp_malloc_sync 00:05:36.860 EAL: No shared files mode enabled, IPC is disabled 00:05:36.860 EAL: Heap on socket 0 was shrunk by 130MB 00:05:36.860 EAL: Trying to obtain current memory policy. 00:05:36.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.860 EAL: Restoring previous memory policy: 4 00:05:36.860 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.860 EAL: request: mp_malloc_sync 00:05:36.860 EAL: No shared files mode enabled, IPC is disabled 00:05:36.860 EAL: Heap on socket 0 was expanded by 258MB 00:05:36.860 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.860 EAL: request: mp_malloc_sync 00:05:36.860 EAL: No shared files mode enabled, IPC is disabled 00:05:36.860 EAL: Heap on socket 0 was shrunk by 258MB 00:05:36.860 EAL: Trying to obtain current memory policy. 
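The expand/shrink cycle above is vtophys_spdk_malloc_test allocating progressively larger buffers and freeing them again; each allocation appears as a heap expansion and each free as the matching shrink. When bisecting a memory problem it can be convenient to run just this suite outside the autotest wrapper, using the same binary the harness invokes (root and reserved hugepages required):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Runs only the env/vtophys CUnit suite and prints the same summary seen below.
    sudo "$rootdir/test/env/vtophys/vtophys"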
00:05:36.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.118 EAL: Restoring previous memory policy: 4 00:05:37.118 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.118 EAL: request: mp_malloc_sync 00:05:37.118 EAL: No shared files mode enabled, IPC is disabled 00:05:37.118 EAL: Heap on socket 0 was expanded by 514MB 00:05:37.118 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.376 EAL: request: mp_malloc_sync 00:05:37.376 EAL: No shared files mode enabled, IPC is disabled 00:05:37.376 EAL: Heap on socket 0 was shrunk by 514MB 00:05:37.376 EAL: Trying to obtain current memory policy. 00:05:37.376 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.633 EAL: Restoring previous memory policy: 4 00:05:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.633 EAL: request: mp_malloc_sync 00:05:37.633 EAL: No shared files mode enabled, IPC is disabled 00:05:37.633 EAL: Heap on socket 0 was expanded by 1026MB 00:05:37.892 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.150 EAL: request: mp_malloc_sync 00:05:38.150 EAL: No shared files mode enabled, IPC is disabled 00:05:38.150 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.150 passed 00:05:38.150 00:05:38.150 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.150 suites 1 1 n/a 0 0 00:05:38.150 tests 2 2 2 0 0 00:05:38.150 asserts 497 497 497 0 n/a 00:05:38.150 00:05:38.150 Elapsed time = 1.407 seconds 00:05:38.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.150 EAL: request: mp_malloc_sync 00:05:38.150 EAL: No shared files mode enabled, IPC is disabled 00:05:38.150 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.150 EAL: No shared files mode enabled, IPC is disabled 00:05:38.150 EAL: No shared files mode enabled, IPC is disabled 00:05:38.150 EAL: No shared files mode enabled, IPC is disabled 00:05:38.150 00:05:38.150 real 0m1.534s 00:05:38.150 user 0m0.893s 00:05:38.150 sys 0m0.608s 00:05:38.150 23:30:12 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.150 23:30:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:38.150 ************************************ 00:05:38.150 END TEST env_vtophys 00:05:38.150 ************************************ 00:05:38.150 23:30:12 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.150 23:30:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.150 23:30:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.150 23:30:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.150 ************************************ 00:05:38.150 START TEST env_pci 00:05:38.150 ************************************ 00:05:38.150 23:30:12 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.150 00:05:38.150 00:05:38.150 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.150 http://cunit.sourceforge.net/ 00:05:38.150 00:05:38.150 00:05:38.150 Suite: pci 00:05:38.150 Test: pci_hook ...[2024-11-19 23:30:12.299215] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 37454 has claimed it 00:05:38.150 EAL: Cannot find device (10000:00:01.0) 00:05:38.150 EAL: Failed to attach device on primary process 00:05:38.150 passed 00:05:38.150 00:05:38.150 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.150 
suites 1 1 n/a 0 0 00:05:38.150 tests 1 1 1 0 0 00:05:38.150 asserts 25 25 25 0 n/a 00:05:38.150 00:05:38.150 Elapsed time = 0.021 seconds 00:05:38.150 00:05:38.150 real 0m0.034s 00:05:38.150 user 0m0.008s 00:05:38.150 sys 0m0.025s 00:05:38.150 23:30:12 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.150 23:30:12 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:38.150 ************************************ 00:05:38.150 END TEST env_pci 00:05:38.150 ************************************ 00:05:38.150 23:30:12 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:38.150 23:30:12 env -- env/env.sh@15 -- # uname 00:05:38.150 23:30:12 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:38.150 23:30:12 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:38.150 23:30:12 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.150 23:30:12 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:38.150 23:30:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.150 23:30:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.150 ************************************ 00:05:38.150 START TEST env_dpdk_post_init 00:05:38.150 ************************************ 00:05:38.150 23:30:12 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.150 EAL: Detected CPU lcores: 48 00:05:38.150 EAL: Detected NUMA nodes: 2 00:05:38.150 EAL: Detected shared linkage of DPDK 00:05:38.150 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.150 EAL: Selected IOVA mode 'VA' 00:05:38.150 EAL: VFIO support initialized 00:05:38.150 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.409 EAL: Using IOMMU type 1 (Type 1) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:38.409 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:39.343 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:42.677 EAL: 
Releasing PCI mapped resource for 0000:88:00.0 00:05:42.677 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:42.677 Starting DPDK initialization... 00:05:42.677 Starting SPDK post initialization... 00:05:42.677 SPDK NVMe probe 00:05:42.677 Attaching to 0000:88:00.0 00:05:42.677 Attached to 0000:88:00.0 00:05:42.677 Cleaning up... 00:05:42.677 00:05:42.677 real 0m4.408s 00:05:42.677 user 0m3.285s 00:05:42.677 sys 0m0.182s 00:05:42.677 23:30:16 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.678 23:30:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:42.678 ************************************ 00:05:42.678 END TEST env_dpdk_post_init 00:05:42.678 ************************************ 00:05:42.678 23:30:16 env -- env/env.sh@26 -- # uname 00:05:42.678 23:30:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:42.678 23:30:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:42.678 23:30:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.678 23:30:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.678 23:30:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.678 ************************************ 00:05:42.678 START TEST env_mem_callbacks 00:05:42.678 ************************************ 00:05:42.678 23:30:16 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:42.678 EAL: Detected CPU lcores: 48 00:05:42.678 EAL: Detected NUMA nodes: 2 00:05:42.678 EAL: Detected shared linkage of DPDK 00:05:42.678 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:42.678 EAL: Selected IOVA mode 'VA' 00:05:42.678 EAL: VFIO support initialized 00:05:42.678 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:42.678 00:05:42.678 00:05:42.678 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.678 http://cunit.sourceforge.net/ 00:05:42.678 00:05:42.678 00:05:42.678 Suite: memory 00:05:42.678 Test: test ... 
00:05:42.678 register 0x200000200000 2097152 00:05:42.678 malloc 3145728 00:05:42.678 register 0x200000400000 4194304 00:05:42.678 buf 0x200000500000 len 3145728 PASSED 00:05:42.678 malloc 64 00:05:42.678 buf 0x2000004fff40 len 64 PASSED 00:05:42.678 malloc 4194304 00:05:42.678 register 0x200000800000 6291456 00:05:42.678 buf 0x200000a00000 len 4194304 PASSED 00:05:42.678 free 0x200000500000 3145728 00:05:42.678 free 0x2000004fff40 64 00:05:42.678 unregister 0x200000400000 4194304 PASSED 00:05:42.678 free 0x200000a00000 4194304 00:05:42.678 unregister 0x200000800000 6291456 PASSED 00:05:42.678 malloc 8388608 00:05:42.678 register 0x200000400000 10485760 00:05:42.678 buf 0x200000600000 len 8388608 PASSED 00:05:42.678 free 0x200000600000 8388608 00:05:42.678 unregister 0x200000400000 10485760 PASSED 00:05:42.678 passed 00:05:42.678 00:05:42.678 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.678 suites 1 1 n/a 0 0 00:05:42.678 tests 1 1 1 0 0 00:05:42.678 asserts 15 15 15 0 n/a 00:05:42.678 00:05:42.678 Elapsed time = 0.005 seconds 00:05:42.678 00:05:42.678 real 0m0.046s 00:05:42.678 user 0m0.010s 00:05:42.678 sys 0m0.036s 00:05:42.678 23:30:16 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.678 23:30:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:42.678 ************************************ 00:05:42.678 END TEST env_mem_callbacks 00:05:42.678 ************************************ 00:05:42.678 00:05:42.678 real 0m6.563s 00:05:42.678 user 0m4.521s 00:05:42.678 sys 0m1.088s 00:05:42.678 23:30:16 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.678 23:30:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.678 ************************************ 00:05:42.678 END TEST env 00:05:42.678 ************************************ 00:05:42.678 23:30:16 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:42.678 23:30:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.678 23:30:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.678 23:30:16 -- common/autotest_common.sh@10 -- # set +x 00:05:42.678 ************************************ 00:05:42.678 START TEST rpc 00:05:42.678 ************************************ 00:05:42.678 23:30:16 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:42.937 * Looking for test storage... 
00:05:42.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:42.937 23:30:16 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:42.937 23:30:16 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:42.937 23:30:16 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:42.937 23:30:17 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.937 23:30:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.937 23:30:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.937 23:30:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.937 23:30:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.937 23:30:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.937 23:30:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.937 23:30:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.937 23:30:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.937 23:30:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.937 23:30:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.937 23:30:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.937 23:30:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:42.937 23:30:17 rpc -- scripts/common.sh@345 -- # : 1 00:05:42.937 23:30:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.937 23:30:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.937 23:30:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:42.937 23:30:17 rpc -- scripts/common.sh@353 -- # local d=1 00:05:42.937 23:30:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.937 23:30:17 rpc -- scripts/common.sh@355 -- # echo 1 00:05:42.937 23:30:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.937 23:30:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:42.937 23:30:17 rpc -- scripts/common.sh@353 -- # local d=2 00:05:42.937 23:30:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.937 23:30:17 rpc -- scripts/common.sh@355 -- # echo 2 00:05:42.937 23:30:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.937 23:30:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.937 23:30:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.937 23:30:17 rpc -- scripts/common.sh@368 -- # return 0 00:05:42.937 23:30:17 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.937 23:30:17 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.937 --rc genhtml_branch_coverage=1 00:05:42.937 --rc genhtml_function_coverage=1 00:05:42.937 --rc genhtml_legend=1 00:05:42.937 --rc geninfo_all_blocks=1 00:05:42.937 --rc geninfo_unexecuted_blocks=1 00:05:42.937 00:05:42.937 ' 00:05:42.937 23:30:17 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.937 --rc genhtml_branch_coverage=1 00:05:42.937 --rc genhtml_function_coverage=1 00:05:42.937 --rc genhtml_legend=1 00:05:42.937 --rc geninfo_all_blocks=1 00:05:42.937 --rc geninfo_unexecuted_blocks=1 00:05:42.937 00:05:42.937 ' 00:05:42.937 23:30:17 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:42.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.937 --rc genhtml_branch_coverage=1 00:05:42.937 --rc genhtml_function_coverage=1 
00:05:42.937 --rc genhtml_legend=1 00:05:42.937 --rc geninfo_all_blocks=1 00:05:42.937 --rc geninfo_unexecuted_blocks=1 00:05:42.937 00:05:42.937 ' 00:05:42.937 23:30:17 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.938 --rc genhtml_branch_coverage=1 00:05:42.938 --rc genhtml_function_coverage=1 00:05:42.938 --rc genhtml_legend=1 00:05:42.938 --rc geninfo_all_blocks=1 00:05:42.938 --rc geninfo_unexecuted_blocks=1 00:05:42.938 00:05:42.938 ' 00:05:42.938 23:30:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=38248 00:05:42.938 23:30:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:42.938 23:30:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.938 23:30:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 38248 00:05:42.938 23:30:17 rpc -- common/autotest_common.sh@835 -- # '[' -z 38248 ']' 00:05:42.938 23:30:17 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.938 23:30:17 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.938 23:30:17 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.938 23:30:17 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.938 23:30:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.938 [2024-11-19 23:30:17.140017] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:05:42.938 [2024-11-19 23:30:17.140134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid38248 ] 00:05:42.938 [2024-11-19 23:30:17.210747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.196 [2024-11-19 23:30:17.259522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:43.196 [2024-11-19 23:30:17.259578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 38248' to capture a snapshot of events at runtime. 00:05:43.196 [2024-11-19 23:30:17.259604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:43.196 [2024-11-19 23:30:17.259617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:43.196 [2024-11-19 23:30:17.259628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid38248 for offline analysis/debug. 
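The rpc_integrity, rpc_plugins, rpc_trace_cmd_test and rpc_daemon_integrity runs that follow all talk to this spdk_tgt instance over its /var/tmp/spdk.sock JSON-RPC socket; the same bdev sequence can be reproduced by hand with the repository's scripts/rpc.py client. A rough sketch, assuming an SPDK checkout as the working directory and the default socket (output elided):

    # create a malloc bdev (8 MiB, 512-byte blocks), wrap it in a passthru bdev, inspect, then tear down
    ./scripts/rpc.py bdev_malloc_create 8 512                      # prints the new bdev name, e.g. Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # 2: the malloc bdev plus the passthru on top
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # back to 0

The test script checks exactly these lengths (and the bdev JSON shown below) after each step.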
00:05:43.196 [2024-11-19 23:30:17.260281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.455 23:30:17 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.455 23:30:17 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:43.455 23:30:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.455 23:30:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.455 23:30:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:43.455 23:30:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:43.455 23:30:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.455 23:30:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.455 23:30:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.455 ************************************ 00:05:43.455 START TEST rpc_integrity 00:05:43.455 ************************************ 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:43.455 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.455 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:43.455 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:43.455 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:43.455 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.455 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:43.455 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.455 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:43.455 { 00:05:43.455 "name": "Malloc0", 00:05:43.455 "aliases": [ 00:05:43.455 "767a97d5-275b-46d7-888b-d47458da438e" 00:05:43.455 ], 00:05:43.455 "product_name": "Malloc disk", 00:05:43.455 "block_size": 512, 00:05:43.455 "num_blocks": 16384, 00:05:43.455 "uuid": "767a97d5-275b-46d7-888b-d47458da438e", 00:05:43.455 "assigned_rate_limits": { 00:05:43.455 "rw_ios_per_sec": 0, 00:05:43.455 "rw_mbytes_per_sec": 0, 00:05:43.455 "r_mbytes_per_sec": 0, 00:05:43.455 "w_mbytes_per_sec": 0 00:05:43.455 }, 
00:05:43.455 "claimed": false, 00:05:43.455 "zoned": false, 00:05:43.455 "supported_io_types": { 00:05:43.455 "read": true, 00:05:43.455 "write": true, 00:05:43.455 "unmap": true, 00:05:43.455 "flush": true, 00:05:43.455 "reset": true, 00:05:43.455 "nvme_admin": false, 00:05:43.455 "nvme_io": false, 00:05:43.455 "nvme_io_md": false, 00:05:43.455 "write_zeroes": true, 00:05:43.455 "zcopy": true, 00:05:43.455 "get_zone_info": false, 00:05:43.455 "zone_management": false, 00:05:43.455 "zone_append": false, 00:05:43.455 "compare": false, 00:05:43.455 "compare_and_write": false, 00:05:43.455 "abort": true, 00:05:43.455 "seek_hole": false, 00:05:43.455 "seek_data": false, 00:05:43.455 "copy": true, 00:05:43.455 "nvme_iov_md": false 00:05:43.455 }, 00:05:43.455 "memory_domains": [ 00:05:43.455 { 00:05:43.455 "dma_device_id": "system", 00:05:43.455 "dma_device_type": 1 00:05:43.455 }, 00:05:43.455 { 00:05:43.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.455 "dma_device_type": 2 00:05:43.455 } 00:05:43.455 ], 00:05:43.455 "driver_specific": {} 00:05:43.455 } 00:05:43.455 ]' 00:05:43.455 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:43.455 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:43.455 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.455 [2024-11-19 23:30:17.660818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:43.455 [2024-11-19 23:30:17.660863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:43.455 [2024-11-19 23:30:17.660889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7b68e0 00:05:43.455 [2024-11-19 23:30:17.660905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:43.455 [2024-11-19 23:30:17.662488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:43.455 [2024-11-19 23:30:17.662519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:43.455 Passthru0 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.455 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.455 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.456 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.456 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:43.456 { 00:05:43.456 "name": "Malloc0", 00:05:43.456 "aliases": [ 00:05:43.456 "767a97d5-275b-46d7-888b-d47458da438e" 00:05:43.456 ], 00:05:43.456 "product_name": "Malloc disk", 00:05:43.456 "block_size": 512, 00:05:43.456 "num_blocks": 16384, 00:05:43.456 "uuid": "767a97d5-275b-46d7-888b-d47458da438e", 00:05:43.456 "assigned_rate_limits": { 00:05:43.456 "rw_ios_per_sec": 0, 00:05:43.456 "rw_mbytes_per_sec": 0, 00:05:43.456 "r_mbytes_per_sec": 0, 00:05:43.456 "w_mbytes_per_sec": 0 00:05:43.456 }, 00:05:43.456 "claimed": true, 00:05:43.456 "claim_type": "exclusive_write", 00:05:43.456 "zoned": false, 00:05:43.456 "supported_io_types": { 00:05:43.456 "read": true, 00:05:43.456 "write": true, 00:05:43.456 "unmap": true, 00:05:43.456 "flush": 
true, 00:05:43.456 "reset": true, 00:05:43.456 "nvme_admin": false, 00:05:43.456 "nvme_io": false, 00:05:43.456 "nvme_io_md": false, 00:05:43.456 "write_zeroes": true, 00:05:43.456 "zcopy": true, 00:05:43.456 "get_zone_info": false, 00:05:43.456 "zone_management": false, 00:05:43.456 "zone_append": false, 00:05:43.456 "compare": false, 00:05:43.456 "compare_and_write": false, 00:05:43.456 "abort": true, 00:05:43.456 "seek_hole": false, 00:05:43.456 "seek_data": false, 00:05:43.456 "copy": true, 00:05:43.456 "nvme_iov_md": false 00:05:43.456 }, 00:05:43.456 "memory_domains": [ 00:05:43.456 { 00:05:43.456 "dma_device_id": "system", 00:05:43.456 "dma_device_type": 1 00:05:43.456 }, 00:05:43.456 { 00:05:43.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.456 "dma_device_type": 2 00:05:43.456 } 00:05:43.456 ], 00:05:43.456 "driver_specific": {} 00:05:43.456 }, 00:05:43.456 { 00:05:43.456 "name": "Passthru0", 00:05:43.456 "aliases": [ 00:05:43.456 "45b13b6b-1c0b-5da7-b5f7-2d02fc605278" 00:05:43.456 ], 00:05:43.456 "product_name": "passthru", 00:05:43.456 "block_size": 512, 00:05:43.456 "num_blocks": 16384, 00:05:43.456 "uuid": "45b13b6b-1c0b-5da7-b5f7-2d02fc605278", 00:05:43.456 "assigned_rate_limits": { 00:05:43.456 "rw_ios_per_sec": 0, 00:05:43.456 "rw_mbytes_per_sec": 0, 00:05:43.456 "r_mbytes_per_sec": 0, 00:05:43.456 "w_mbytes_per_sec": 0 00:05:43.456 }, 00:05:43.456 "claimed": false, 00:05:43.456 "zoned": false, 00:05:43.456 "supported_io_types": { 00:05:43.456 "read": true, 00:05:43.456 "write": true, 00:05:43.456 "unmap": true, 00:05:43.456 "flush": true, 00:05:43.456 "reset": true, 00:05:43.456 "nvme_admin": false, 00:05:43.456 "nvme_io": false, 00:05:43.456 "nvme_io_md": false, 00:05:43.456 "write_zeroes": true, 00:05:43.456 "zcopy": true, 00:05:43.456 "get_zone_info": false, 00:05:43.456 "zone_management": false, 00:05:43.456 "zone_append": false, 00:05:43.456 "compare": false, 00:05:43.456 "compare_and_write": false, 00:05:43.456 "abort": true, 00:05:43.456 "seek_hole": false, 00:05:43.456 "seek_data": false, 00:05:43.456 "copy": true, 00:05:43.456 "nvme_iov_md": false 00:05:43.456 }, 00:05:43.456 "memory_domains": [ 00:05:43.456 { 00:05:43.456 "dma_device_id": "system", 00:05:43.456 "dma_device_type": 1 00:05:43.456 }, 00:05:43.456 { 00:05:43.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.456 "dma_device_type": 2 00:05:43.456 } 00:05:43.456 ], 00:05:43.456 "driver_specific": { 00:05:43.456 "passthru": { 00:05:43.456 "name": "Passthru0", 00:05:43.456 "base_bdev_name": "Malloc0" 00:05:43.456 } 00:05:43.456 } 00:05:43.456 } 00:05:43.456 ]' 00:05:43.456 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:43.456 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:43.456 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:43.456 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.456 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.456 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.456 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:43.456 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.456 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.456 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.456 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:43.456 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.456 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.456 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.456 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.456 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:43.714 23:30:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.714 00:05:43.714 real 0m0.226s 00:05:43.714 user 0m0.149s 00:05:43.714 sys 0m0.024s 00:05:43.714 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.715 23:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.715 ************************************ 00:05:43.715 END TEST rpc_integrity 00:05:43.715 ************************************ 00:05:43.715 23:30:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:43.715 23:30:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.715 23:30:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.715 23:30:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.715 ************************************ 00:05:43.715 START TEST rpc_plugins 00:05:43.715 ************************************ 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:43.715 23:30:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.715 23:30:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:43.715 23:30:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.715 23:30:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:43.715 { 00:05:43.715 "name": "Malloc1", 00:05:43.715 "aliases": [ 00:05:43.715 "708db069-641b-4e4e-a3c5-05b9992c5c16" 00:05:43.715 ], 00:05:43.715 "product_name": "Malloc disk", 00:05:43.715 "block_size": 4096, 00:05:43.715 "num_blocks": 256, 00:05:43.715 "uuid": "708db069-641b-4e4e-a3c5-05b9992c5c16", 00:05:43.715 "assigned_rate_limits": { 00:05:43.715 "rw_ios_per_sec": 0, 00:05:43.715 "rw_mbytes_per_sec": 0, 00:05:43.715 "r_mbytes_per_sec": 0, 00:05:43.715 "w_mbytes_per_sec": 0 00:05:43.715 }, 00:05:43.715 "claimed": false, 00:05:43.715 "zoned": false, 00:05:43.715 "supported_io_types": { 00:05:43.715 "read": true, 00:05:43.715 "write": true, 00:05:43.715 "unmap": true, 00:05:43.715 "flush": true, 00:05:43.715 "reset": true, 00:05:43.715 "nvme_admin": false, 00:05:43.715 "nvme_io": false, 00:05:43.715 "nvme_io_md": false, 00:05:43.715 "write_zeroes": true, 00:05:43.715 "zcopy": true, 00:05:43.715 "get_zone_info": false, 00:05:43.715 "zone_management": false, 00:05:43.715 "zone_append": false, 00:05:43.715 "compare": false, 00:05:43.715 "compare_and_write": false, 00:05:43.715 "abort": true, 00:05:43.715 "seek_hole": false, 00:05:43.715 "seek_data": false, 00:05:43.715 "copy": true, 00:05:43.715 "nvme_iov_md": false 
00:05:43.715 }, 00:05:43.715 "memory_domains": [ 00:05:43.715 { 00:05:43.715 "dma_device_id": "system", 00:05:43.715 "dma_device_type": 1 00:05:43.715 }, 00:05:43.715 { 00:05:43.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.715 "dma_device_type": 2 00:05:43.715 } 00:05:43.715 ], 00:05:43.715 "driver_specific": {} 00:05:43.715 } 00:05:43.715 ]' 00:05:43.715 23:30:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:43.715 23:30:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:43.715 23:30:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.715 23:30:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.715 23:30:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:43.715 23:30:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:43.715 23:30:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:43.715 00:05:43.715 real 0m0.117s 00:05:43.715 user 0m0.076s 00:05:43.715 sys 0m0.011s 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.715 23:30:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.715 ************************************ 00:05:43.715 END TEST rpc_plugins 00:05:43.715 ************************************ 00:05:43.715 23:30:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:43.715 23:30:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.715 23:30:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.715 23:30:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.715 ************************************ 00:05:43.715 START TEST rpc_trace_cmd_test 00:05:43.715 ************************************ 00:05:43.715 23:30:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:43.715 23:30:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:43.715 23:30:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:43.715 23:30:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.715 23:30:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.715 23:30:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.715 23:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:43.715 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid38248", 00:05:43.715 "tpoint_group_mask": "0x8", 00:05:43.715 "iscsi_conn": { 00:05:43.715 "mask": "0x2", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "scsi": { 00:05:43.715 "mask": "0x4", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "bdev": { 00:05:43.715 "mask": "0x8", 00:05:43.715 "tpoint_mask": "0xffffffffffffffff" 00:05:43.715 }, 00:05:43.715 "nvmf_rdma": { 00:05:43.715 "mask": "0x10", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "nvmf_tcp": { 00:05:43.715 "mask": "0x20", 00:05:43.715 
"tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "ftl": { 00:05:43.715 "mask": "0x40", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "blobfs": { 00:05:43.715 "mask": "0x80", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "dsa": { 00:05:43.715 "mask": "0x200", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "thread": { 00:05:43.715 "mask": "0x400", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "nvme_pcie": { 00:05:43.715 "mask": "0x800", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "iaa": { 00:05:43.715 "mask": "0x1000", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "nvme_tcp": { 00:05:43.715 "mask": "0x2000", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "bdev_nvme": { 00:05:43.715 "mask": "0x4000", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "sock": { 00:05:43.715 "mask": "0x8000", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "blob": { 00:05:43.715 "mask": "0x10000", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "bdev_raid": { 00:05:43.715 "mask": "0x20000", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 }, 00:05:43.715 "scheduler": { 00:05:43.715 "mask": "0x40000", 00:05:43.715 "tpoint_mask": "0x0" 00:05:43.715 } 00:05:43.715 }' 00:05:43.715 23:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:43.974 23:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:43.974 23:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:43.974 23:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:43.974 23:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:43.974 23:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:43.974 23:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:43.974 23:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:43.974 23:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:43.974 23:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:43.974 00:05:43.974 real 0m0.207s 00:05:43.974 user 0m0.185s 00:05:43.974 sys 0m0.014s 00:05:43.974 23:30:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.974 23:30:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.974 ************************************ 00:05:43.974 END TEST rpc_trace_cmd_test 00:05:43.974 ************************************ 00:05:43.974 23:30:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:43.974 23:30:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:43.974 23:30:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:43.974 23:30:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.974 23:30:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.974 23:30:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.974 ************************************ 00:05:43.974 START TEST rpc_daemon_integrity 00:05:43.974 ************************************ 00:05:43.974 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:43.974 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.974 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.974 23:30:18 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.974 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.974 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:43.974 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:44.233 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:44.233 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:44.233 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.233 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:44.234 { 00:05:44.234 "name": "Malloc2", 00:05:44.234 "aliases": [ 00:05:44.234 "b4cb0d42-1944-4c52-a94d-aad7ac74c3e2" 00:05:44.234 ], 00:05:44.234 "product_name": "Malloc disk", 00:05:44.234 "block_size": 512, 00:05:44.234 "num_blocks": 16384, 00:05:44.234 "uuid": "b4cb0d42-1944-4c52-a94d-aad7ac74c3e2", 00:05:44.234 "assigned_rate_limits": { 00:05:44.234 "rw_ios_per_sec": 0, 00:05:44.234 "rw_mbytes_per_sec": 0, 00:05:44.234 "r_mbytes_per_sec": 0, 00:05:44.234 "w_mbytes_per_sec": 0 00:05:44.234 }, 00:05:44.234 "claimed": false, 00:05:44.234 "zoned": false, 00:05:44.234 "supported_io_types": { 00:05:44.234 "read": true, 00:05:44.234 "write": true, 00:05:44.234 "unmap": true, 00:05:44.234 "flush": true, 00:05:44.234 "reset": true, 00:05:44.234 "nvme_admin": false, 00:05:44.234 "nvme_io": false, 00:05:44.234 "nvme_io_md": false, 00:05:44.234 "write_zeroes": true, 00:05:44.234 "zcopy": true, 00:05:44.234 "get_zone_info": false, 00:05:44.234 "zone_management": false, 00:05:44.234 "zone_append": false, 00:05:44.234 "compare": false, 00:05:44.234 "compare_and_write": false, 00:05:44.234 "abort": true, 00:05:44.234 "seek_hole": false, 00:05:44.234 "seek_data": false, 00:05:44.234 "copy": true, 00:05:44.234 "nvme_iov_md": false 00:05:44.234 }, 00:05:44.234 "memory_domains": [ 00:05:44.234 { 00:05:44.234 "dma_device_id": "system", 00:05:44.234 "dma_device_type": 1 00:05:44.234 }, 00:05:44.234 { 00:05:44.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.234 "dma_device_type": 2 00:05:44.234 } 00:05:44.234 ], 00:05:44.234 "driver_specific": {} 00:05:44.234 } 00:05:44.234 ]' 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.234 [2024-11-19 23:30:18.351766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:44.234 
[2024-11-19 23:30:18.351812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:44.234 [2024-11-19 23:30:18.351836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8e66f0 00:05:44.234 [2024-11-19 23:30:18.351853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:44.234 [2024-11-19 23:30:18.353229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:44.234 [2024-11-19 23:30:18.353256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:44.234 Passthru0 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:44.234 { 00:05:44.234 "name": "Malloc2", 00:05:44.234 "aliases": [ 00:05:44.234 "b4cb0d42-1944-4c52-a94d-aad7ac74c3e2" 00:05:44.234 ], 00:05:44.234 "product_name": "Malloc disk", 00:05:44.234 "block_size": 512, 00:05:44.234 "num_blocks": 16384, 00:05:44.234 "uuid": "b4cb0d42-1944-4c52-a94d-aad7ac74c3e2", 00:05:44.234 "assigned_rate_limits": { 00:05:44.234 "rw_ios_per_sec": 0, 00:05:44.234 "rw_mbytes_per_sec": 0, 00:05:44.234 "r_mbytes_per_sec": 0, 00:05:44.234 "w_mbytes_per_sec": 0 00:05:44.234 }, 00:05:44.234 "claimed": true, 00:05:44.234 "claim_type": "exclusive_write", 00:05:44.234 "zoned": false, 00:05:44.234 "supported_io_types": { 00:05:44.234 "read": true, 00:05:44.234 "write": true, 00:05:44.234 "unmap": true, 00:05:44.234 "flush": true, 00:05:44.234 "reset": true, 00:05:44.234 "nvme_admin": false, 00:05:44.234 "nvme_io": false, 00:05:44.234 "nvme_io_md": false, 00:05:44.234 "write_zeroes": true, 00:05:44.234 "zcopy": true, 00:05:44.234 "get_zone_info": false, 00:05:44.234 "zone_management": false, 00:05:44.234 "zone_append": false, 00:05:44.234 "compare": false, 00:05:44.234 "compare_and_write": false, 00:05:44.234 "abort": true, 00:05:44.234 "seek_hole": false, 00:05:44.234 "seek_data": false, 00:05:44.234 "copy": true, 00:05:44.234 "nvme_iov_md": false 00:05:44.234 }, 00:05:44.234 "memory_domains": [ 00:05:44.234 { 00:05:44.234 "dma_device_id": "system", 00:05:44.234 "dma_device_type": 1 00:05:44.234 }, 00:05:44.234 { 00:05:44.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.234 "dma_device_type": 2 00:05:44.234 } 00:05:44.234 ], 00:05:44.234 "driver_specific": {} 00:05:44.234 }, 00:05:44.234 { 00:05:44.234 "name": "Passthru0", 00:05:44.234 "aliases": [ 00:05:44.234 "81967d3b-09b8-534a-b66b-0c204f7459f8" 00:05:44.234 ], 00:05:44.234 "product_name": "passthru", 00:05:44.234 "block_size": 512, 00:05:44.234 "num_blocks": 16384, 00:05:44.234 "uuid": "81967d3b-09b8-534a-b66b-0c204f7459f8", 00:05:44.234 "assigned_rate_limits": { 00:05:44.234 "rw_ios_per_sec": 0, 00:05:44.234 "rw_mbytes_per_sec": 0, 00:05:44.234 "r_mbytes_per_sec": 0, 00:05:44.234 "w_mbytes_per_sec": 0 00:05:44.234 }, 00:05:44.234 "claimed": false, 00:05:44.234 "zoned": false, 00:05:44.234 "supported_io_types": { 00:05:44.234 "read": true, 00:05:44.234 "write": true, 00:05:44.234 "unmap": true, 00:05:44.234 "flush": true, 00:05:44.234 "reset": true, 
00:05:44.234 "nvme_admin": false, 00:05:44.234 "nvme_io": false, 00:05:44.234 "nvme_io_md": false, 00:05:44.234 "write_zeroes": true, 00:05:44.234 "zcopy": true, 00:05:44.234 "get_zone_info": false, 00:05:44.234 "zone_management": false, 00:05:44.234 "zone_append": false, 00:05:44.234 "compare": false, 00:05:44.234 "compare_and_write": false, 00:05:44.234 "abort": true, 00:05:44.234 "seek_hole": false, 00:05:44.234 "seek_data": false, 00:05:44.234 "copy": true, 00:05:44.234 "nvme_iov_md": false 00:05:44.234 }, 00:05:44.234 "memory_domains": [ 00:05:44.234 { 00:05:44.234 "dma_device_id": "system", 00:05:44.234 "dma_device_type": 1 00:05:44.234 }, 00:05:44.234 { 00:05:44.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.234 "dma_device_type": 2 00:05:44.234 } 00:05:44.234 ], 00:05:44.234 "driver_specific": { 00:05:44.234 "passthru": { 00:05:44.234 "name": "Passthru0", 00:05:44.234 "base_bdev_name": "Malloc2" 00:05:44.234 } 00:05:44.234 } 00:05:44.234 } 00:05:44.234 ]' 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:44.234 00:05:44.234 real 0m0.224s 00:05:44.234 user 0m0.154s 00:05:44.234 sys 0m0.015s 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.234 23:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.234 ************************************ 00:05:44.234 END TEST rpc_daemon_integrity 00:05:44.234 ************************************ 00:05:44.234 23:30:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:44.234 23:30:18 rpc -- rpc/rpc.sh@84 -- # killprocess 38248 00:05:44.234 23:30:18 rpc -- common/autotest_common.sh@954 -- # '[' -z 38248 ']' 00:05:44.234 23:30:18 rpc -- common/autotest_common.sh@958 -- # kill -0 38248 00:05:44.235 23:30:18 rpc -- common/autotest_common.sh@959 -- # uname 00:05:44.235 23:30:18 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.235 23:30:18 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 38248 00:05:44.235 
23:30:18 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.235 23:30:18 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.235 23:30:18 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 38248' 00:05:44.235 killing process with pid 38248 00:05:44.235 23:30:18 rpc -- common/autotest_common.sh@973 -- # kill 38248 00:05:44.235 23:30:18 rpc -- common/autotest_common.sh@978 -- # wait 38248 00:05:44.801 00:05:44.801 real 0m1.997s 00:05:44.801 user 0m2.492s 00:05:44.801 sys 0m0.620s 00:05:44.801 23:30:18 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.801 23:30:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.801 ************************************ 00:05:44.801 END TEST rpc 00:05:44.801 ************************************ 00:05:44.801 23:30:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:44.801 23:30:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.801 23:30:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.801 23:30:18 -- common/autotest_common.sh@10 -- # set +x 00:05:44.801 ************************************ 00:05:44.801 START TEST skip_rpc 00:05:44.801 ************************************ 00:05:44.801 23:30:18 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:44.801 * Looking for test storage... 00:05:44.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:44.801 23:30:19 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.801 23:30:19 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.801 23:30:19 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.059 23:30:19 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.059 23:30:19 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:45.059 23:30:19 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.059 23:30:19 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.059 --rc genhtml_branch_coverage=1 00:05:45.059 --rc genhtml_function_coverage=1 00:05:45.059 --rc genhtml_legend=1 00:05:45.059 --rc geninfo_all_blocks=1 00:05:45.059 --rc geninfo_unexecuted_blocks=1 00:05:45.059 00:05:45.059 ' 00:05:45.059 23:30:19 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.059 --rc genhtml_branch_coverage=1 00:05:45.059 --rc genhtml_function_coverage=1 00:05:45.059 --rc genhtml_legend=1 00:05:45.059 --rc geninfo_all_blocks=1 00:05:45.059 --rc geninfo_unexecuted_blocks=1 00:05:45.059 00:05:45.059 ' 00:05:45.059 23:30:19 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.059 --rc genhtml_branch_coverage=1 00:05:45.059 --rc genhtml_function_coverage=1 00:05:45.059 --rc genhtml_legend=1 00:05:45.059 --rc geninfo_all_blocks=1 00:05:45.059 --rc geninfo_unexecuted_blocks=1 00:05:45.059 00:05:45.059 ' 00:05:45.059 23:30:19 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.059 --rc genhtml_branch_coverage=1 00:05:45.059 --rc genhtml_function_coverage=1 00:05:45.059 --rc genhtml_legend=1 00:05:45.059 --rc geninfo_all_blocks=1 00:05:45.059 --rc geninfo_unexecuted_blocks=1 00:05:45.059 00:05:45.059 ' 00:05:45.059 23:30:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:45.059 23:30:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:45.059 23:30:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:45.059 23:30:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.059 23:30:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.059 23:30:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.059 ************************************ 00:05:45.059 START TEST skip_rpc 00:05:45.059 ************************************ 00:05:45.060 23:30:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:45.060 
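test_skip_rpc, which starts below, launches spdk_tgt with --no-rpc-server and asserts that an RPC call fails while the target itself stays up. A hand-run equivalent is roughly the following sketch (default socket path assumed, exit codes not checked as strictly as the harness does):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                       # give the target time to finish startup
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC answered despite --no-rpc-server"
    else
        echo "expected failure: no RPC server is listening"
    fi
    kill "$tgt_pid"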
23:30:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=38636 00:05:45.060 23:30:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:45.060 23:30:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.060 23:30:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:45.060 [2024-11-19 23:30:19.215217] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:05:45.060 [2024-11-19 23:30:19.215306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid38636 ] 00:05:45.060 [2024-11-19 23:30:19.287399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.060 [2024-11-19 23:30:19.337383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 38636 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 38636 ']' 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 38636 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:50.322 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.323 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 38636 00:05:50.323 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.323 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.323 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 38636' 00:05:50.323 killing process with pid 38636 00:05:50.323 23:30:24 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 38636 00:05:50.323 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 38636 00:05:50.323 00:05:50.323 real 0m5.440s 00:05:50.323 user 0m5.118s 00:05:50.323 sys 0m0.339s 00:05:50.323 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.323 23:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.323 ************************************ 00:05:50.323 END TEST skip_rpc 00:05:50.323 ************************************ 00:05:50.323 23:30:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:50.323 23:30:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.323 23:30:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.323 23:30:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.581 ************************************ 00:05:50.581 START TEST skip_rpc_with_json 00:05:50.581 ************************************ 00:05:50.581 23:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:50.581 23:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:50.581 23:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=39260 00:05:50.581 23:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.581 23:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.581 23:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 39260 00:05:50.581 23:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 39260 ']' 00:05:50.582 23:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.582 23:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.582 23:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.582 23:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.582 23:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.582 [2024-11-19 23:30:24.708062] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
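test_skip_rpc_with_json, starting here, uses a normally started target: nvmf_get_transports fails until a TCP transport has been created, and the resulting configuration is then written out with save_config. The first half by hand would look roughly like this (default socket assumed; the harness's waitforlisten replaced by a plain sleep):

    ./build/bin/spdk_tgt -m 0x1 &
    sleep 5                                             # wait for the RPC socket to come up
    ./scripts/rpc.py nvmf_get_transports --trtype tcp   # errors: no TCP transport exists yet
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json          # produces the JSON dump shown below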
00:05:50.582 [2024-11-19 23:30:24.708182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39260 ] 00:05:50.582 [2024-11-19 23:30:24.784323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.582 [2024-11-19 23:30:24.836446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.840 [2024-11-19 23:30:25.113250] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:50.840 request: 00:05:50.840 { 00:05:50.840 "trtype": "tcp", 00:05:50.840 "method": "nvmf_get_transports", 00:05:50.840 "req_id": 1 00:05:50.840 } 00:05:50.840 Got JSON-RPC error response 00:05:50.840 response: 00:05:50.840 { 00:05:50.840 "code": -19, 00:05:50.840 "message": "No such device" 00:05:50.840 } 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.840 [2024-11-19 23:30:25.121381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.840 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.098 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.098 23:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:51.098 { 00:05:51.098 "subsystems": [ 00:05:51.098 { 00:05:51.098 "subsystem": "fsdev", 00:05:51.098 "config": [ 00:05:51.098 { 00:05:51.098 "method": "fsdev_set_opts", 00:05:51.098 "params": { 00:05:51.098 "fsdev_io_pool_size": 65535, 00:05:51.098 "fsdev_io_cache_size": 256 00:05:51.098 } 00:05:51.098 } 00:05:51.098 ] 00:05:51.098 }, 00:05:51.098 { 00:05:51.098 "subsystem": "vfio_user_target", 00:05:51.098 "config": null 00:05:51.098 }, 00:05:51.098 { 00:05:51.098 "subsystem": "keyring", 00:05:51.098 "config": [] 00:05:51.098 }, 00:05:51.098 { 00:05:51.098 "subsystem": "iobuf", 00:05:51.098 "config": [ 00:05:51.098 { 00:05:51.098 "method": "iobuf_set_options", 00:05:51.098 "params": { 00:05:51.098 "small_pool_count": 8192, 00:05:51.098 "large_pool_count": 1024, 00:05:51.098 "small_bufsize": 8192, 00:05:51.098 "large_bufsize": 135168, 00:05:51.098 "enable_numa": false 00:05:51.098 } 00:05:51.098 } 00:05:51.098 
] 00:05:51.098 }, 00:05:51.098 { 00:05:51.098 "subsystem": "sock", 00:05:51.098 "config": [ 00:05:51.098 { 00:05:51.098 "method": "sock_set_default_impl", 00:05:51.098 "params": { 00:05:51.098 "impl_name": "posix" 00:05:51.098 } 00:05:51.098 }, 00:05:51.098 { 00:05:51.098 "method": "sock_impl_set_options", 00:05:51.098 "params": { 00:05:51.098 "impl_name": "ssl", 00:05:51.098 "recv_buf_size": 4096, 00:05:51.098 "send_buf_size": 4096, 00:05:51.098 "enable_recv_pipe": true, 00:05:51.098 "enable_quickack": false, 00:05:51.098 "enable_placement_id": 0, 00:05:51.098 "enable_zerocopy_send_server": true, 00:05:51.098 "enable_zerocopy_send_client": false, 00:05:51.098 "zerocopy_threshold": 0, 00:05:51.098 "tls_version": 0, 00:05:51.098 "enable_ktls": false 00:05:51.098 } 00:05:51.098 }, 00:05:51.098 { 00:05:51.098 "method": "sock_impl_set_options", 00:05:51.098 "params": { 00:05:51.098 "impl_name": "posix", 00:05:51.098 "recv_buf_size": 2097152, 00:05:51.098 "send_buf_size": 2097152, 00:05:51.098 "enable_recv_pipe": true, 00:05:51.098 "enable_quickack": false, 00:05:51.098 "enable_placement_id": 0, 00:05:51.098 "enable_zerocopy_send_server": true, 00:05:51.098 "enable_zerocopy_send_client": false, 00:05:51.098 "zerocopy_threshold": 0, 00:05:51.098 "tls_version": 0, 00:05:51.098 "enable_ktls": false 00:05:51.098 } 00:05:51.098 } 00:05:51.098 ] 00:05:51.098 }, 00:05:51.098 { 00:05:51.098 "subsystem": "vmd", 00:05:51.098 "config": [] 00:05:51.098 }, 00:05:51.098 { 00:05:51.098 "subsystem": "accel", 00:05:51.098 "config": [ 00:05:51.098 { 00:05:51.098 "method": "accel_set_options", 00:05:51.098 "params": { 00:05:51.098 "small_cache_size": 128, 00:05:51.098 "large_cache_size": 16, 00:05:51.098 "task_count": 2048, 00:05:51.098 "sequence_count": 2048, 00:05:51.098 "buf_count": 2048 00:05:51.098 } 00:05:51.098 } 00:05:51.098 ] 00:05:51.098 }, 00:05:51.098 { 00:05:51.098 "subsystem": "bdev", 00:05:51.099 "config": [ 00:05:51.099 { 00:05:51.099 "method": "bdev_set_options", 00:05:51.099 "params": { 00:05:51.099 "bdev_io_pool_size": 65535, 00:05:51.099 "bdev_io_cache_size": 256, 00:05:51.099 "bdev_auto_examine": true, 00:05:51.099 "iobuf_small_cache_size": 128, 00:05:51.099 "iobuf_large_cache_size": 16 00:05:51.099 } 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "method": "bdev_raid_set_options", 00:05:51.099 "params": { 00:05:51.099 "process_window_size_kb": 1024, 00:05:51.099 "process_max_bandwidth_mb_sec": 0 00:05:51.099 } 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "method": "bdev_iscsi_set_options", 00:05:51.099 "params": { 00:05:51.099 "timeout_sec": 30 00:05:51.099 } 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "method": "bdev_nvme_set_options", 00:05:51.099 "params": { 00:05:51.099 "action_on_timeout": "none", 00:05:51.099 "timeout_us": 0, 00:05:51.099 "timeout_admin_us": 0, 00:05:51.099 "keep_alive_timeout_ms": 10000, 00:05:51.099 "arbitration_burst": 0, 00:05:51.099 "low_priority_weight": 0, 00:05:51.099 "medium_priority_weight": 0, 00:05:51.099 "high_priority_weight": 0, 00:05:51.099 "nvme_adminq_poll_period_us": 10000, 00:05:51.099 "nvme_ioq_poll_period_us": 0, 00:05:51.099 "io_queue_requests": 0, 00:05:51.099 "delay_cmd_submit": true, 00:05:51.099 "transport_retry_count": 4, 00:05:51.099 "bdev_retry_count": 3, 00:05:51.099 "transport_ack_timeout": 0, 00:05:51.099 "ctrlr_loss_timeout_sec": 0, 00:05:51.099 "reconnect_delay_sec": 0, 00:05:51.099 "fast_io_fail_timeout_sec": 0, 00:05:51.099 "disable_auto_failback": false, 00:05:51.099 "generate_uuids": false, 00:05:51.099 "transport_tos": 0, 
00:05:51.099 "nvme_error_stat": false, 00:05:51.099 "rdma_srq_size": 0, 00:05:51.099 "io_path_stat": false, 00:05:51.099 "allow_accel_sequence": false, 00:05:51.099 "rdma_max_cq_size": 0, 00:05:51.099 "rdma_cm_event_timeout_ms": 0, 00:05:51.099 "dhchap_digests": [ 00:05:51.099 "sha256", 00:05:51.099 "sha384", 00:05:51.099 "sha512" 00:05:51.099 ], 00:05:51.099 "dhchap_dhgroups": [ 00:05:51.099 "null", 00:05:51.099 "ffdhe2048", 00:05:51.099 "ffdhe3072", 00:05:51.099 "ffdhe4096", 00:05:51.099 "ffdhe6144", 00:05:51.099 "ffdhe8192" 00:05:51.099 ] 00:05:51.099 } 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "method": "bdev_nvme_set_hotplug", 00:05:51.099 "params": { 00:05:51.099 "period_us": 100000, 00:05:51.099 "enable": false 00:05:51.099 } 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "method": "bdev_wait_for_examine" 00:05:51.099 } 00:05:51.099 ] 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "subsystem": "scsi", 00:05:51.099 "config": null 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "subsystem": "scheduler", 00:05:51.099 "config": [ 00:05:51.099 { 00:05:51.099 "method": "framework_set_scheduler", 00:05:51.099 "params": { 00:05:51.099 "name": "static" 00:05:51.099 } 00:05:51.099 } 00:05:51.099 ] 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "subsystem": "vhost_scsi", 00:05:51.099 "config": [] 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "subsystem": "vhost_blk", 00:05:51.099 "config": [] 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "subsystem": "ublk", 00:05:51.099 "config": [] 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "subsystem": "nbd", 00:05:51.099 "config": [] 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "subsystem": "nvmf", 00:05:51.099 "config": [ 00:05:51.099 { 00:05:51.099 "method": "nvmf_set_config", 00:05:51.099 "params": { 00:05:51.099 "discovery_filter": "match_any", 00:05:51.099 "admin_cmd_passthru": { 00:05:51.099 "identify_ctrlr": false 00:05:51.099 }, 00:05:51.099 "dhchap_digests": [ 00:05:51.099 "sha256", 00:05:51.099 "sha384", 00:05:51.099 "sha512" 00:05:51.099 ], 00:05:51.099 "dhchap_dhgroups": [ 00:05:51.099 "null", 00:05:51.099 "ffdhe2048", 00:05:51.099 "ffdhe3072", 00:05:51.099 "ffdhe4096", 00:05:51.099 "ffdhe6144", 00:05:51.099 "ffdhe8192" 00:05:51.099 ] 00:05:51.099 } 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "method": "nvmf_set_max_subsystems", 00:05:51.099 "params": { 00:05:51.099 "max_subsystems": 1024 00:05:51.099 } 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "method": "nvmf_set_crdt", 00:05:51.099 "params": { 00:05:51.099 "crdt1": 0, 00:05:51.099 "crdt2": 0, 00:05:51.099 "crdt3": 0 00:05:51.099 } 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "method": "nvmf_create_transport", 00:05:51.099 "params": { 00:05:51.099 "trtype": "TCP", 00:05:51.099 "max_queue_depth": 128, 00:05:51.099 "max_io_qpairs_per_ctrlr": 127, 00:05:51.099 "in_capsule_data_size": 4096, 00:05:51.099 "max_io_size": 131072, 00:05:51.099 "io_unit_size": 131072, 00:05:51.099 "max_aq_depth": 128, 00:05:51.099 "num_shared_buffers": 511, 00:05:51.099 "buf_cache_size": 4294967295, 00:05:51.099 "dif_insert_or_strip": false, 00:05:51.099 "zcopy": false, 00:05:51.099 "c2h_success": true, 00:05:51.099 "sock_priority": 0, 00:05:51.099 "abort_timeout_sec": 1, 00:05:51.099 "ack_timeout": 0, 00:05:51.099 "data_wr_pool_size": 0 00:05:51.099 } 00:05:51.099 } 00:05:51.099 ] 00:05:51.099 }, 00:05:51.099 { 00:05:51.099 "subsystem": "iscsi", 00:05:51.099 "config": [ 00:05:51.099 { 00:05:51.099 "method": "iscsi_set_options", 00:05:51.099 "params": { 00:05:51.099 "node_base": "iqn.2016-06.io.spdk", 00:05:51.099 "max_sessions": 
128, 00:05:51.099 "max_connections_per_session": 2, 00:05:51.099 "max_queue_depth": 64, 00:05:51.099 "default_time2wait": 2, 00:05:51.099 "default_time2retain": 20, 00:05:51.099 "first_burst_length": 8192, 00:05:51.099 "immediate_data": true, 00:05:51.099 "allow_duplicated_isid": false, 00:05:51.099 "error_recovery_level": 0, 00:05:51.099 "nop_timeout": 60, 00:05:51.099 "nop_in_interval": 30, 00:05:51.099 "disable_chap": false, 00:05:51.099 "require_chap": false, 00:05:51.099 "mutual_chap": false, 00:05:51.099 "chap_group": 0, 00:05:51.099 "max_large_datain_per_connection": 64, 00:05:51.099 "max_r2t_per_connection": 4, 00:05:51.099 "pdu_pool_size": 36864, 00:05:51.099 "immediate_data_pool_size": 16384, 00:05:51.099 "data_out_pool_size": 2048 00:05:51.099 } 00:05:51.099 } 00:05:51.099 ] 00:05:51.099 } 00:05:51.099 ] 00:05:51.099 } 00:05:51.099 23:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:51.099 23:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 39260 00:05:51.099 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 39260 ']' 00:05:51.099 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 39260 00:05:51.099 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:51.099 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.099 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 39260 00:05:51.099 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.099 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.099 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 39260' 00:05:51.099 killing process with pid 39260 00:05:51.099 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 39260 00:05:51.099 23:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 39260 00:05:51.665 23:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=39402 00:05:51.665 23:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:51.665 23:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:56.928 23:30:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 39402 00:05:56.928 23:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 39402 ']' 00:05:56.928 23:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 39402 00:05:56.928 23:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:56.928 23:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.928 23:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 39402 00:05:56.928 23:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.928 23:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.928 23:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 39402' 00:05:56.928 killing process with pid 39402 00:05:56.928 23:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 39402 00:05:56.928 23:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 39402 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:56.928 00:05:56.928 real 0m6.512s 00:05:56.928 user 0m6.144s 00:05:56.928 sys 0m0.725s 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.928 ************************************ 00:05:56.928 END TEST skip_rpc_with_json 00:05:56.928 ************************************ 00:05:56.928 23:30:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:56.928 23:30:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.928 23:30:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.928 23:30:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.928 ************************************ 00:05:56.928 START TEST skip_rpc_with_delay 00:05:56.928 ************************************ 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:56.928 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:57.186 [2024-11-19 23:30:31.266173] app.c: 
842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:57.187 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:57.187 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.187 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.187 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.187 00:05:57.187 real 0m0.080s 00:05:57.187 user 0m0.050s 00:05:57.187 sys 0m0.029s 00:05:57.187 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.187 23:30:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:57.187 ************************************ 00:05:57.187 END TEST skip_rpc_with_delay 00:05:57.187 ************************************ 00:05:57.187 23:30:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:57.187 23:30:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:57.187 23:30:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:57.187 23:30:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.187 23:30:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.187 23:30:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.187 ************************************ 00:05:57.187 START TEST exit_on_failed_rpc_init 00:05:57.187 ************************************ 00:05:57.187 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:57.187 23:30:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=40118 00:05:57.187 23:30:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.187 23:30:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 40118 00:05:57.187 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 40118 ']' 00:05:57.187 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.187 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.187 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.187 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.187 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.187 [2024-11-19 23:30:31.384992] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
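The two tests that finish above cover the target's JSON configuration round-trip and its start-up flag validation. skip_rpc_with_json first shows nvmf_get_transports returning a JSON-RPC error (code -19, "No such device") while no TCP transport exists, then creates the transport, captures the live configuration with save_config, relaunches spdk_tgt from that file, and greps the new log for 'TCP Transport Init'. A minimal sketch of the same round-trip, assuming an SPDK checkout at $SPDK_DIR and the default RPC socket /var/tmp/spdk.sock (the /tmp output path is illustrative):

    # create the transport, capture the running config, then replay it in a fresh target
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp
    $SPDK_DIR/scripts/rpc.py save_config > /tmp/config.json
    $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json

skip_rpc_with_delay then checks the complementary failure path: --wait-for-rpc together with --no-rpc-server is rejected with the app.c *ERROR* shown just above, and the test only needs the non-zero exit code.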
00:05:57.187 [2024-11-19 23:30:31.385113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40118 ] 00:05:57.187 [2024-11-19 23:30:31.450737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.445 [2024-11-19 23:30:31.501182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:57.703 23:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.703 [2024-11-19 23:30:31.827094] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:05:57.703 [2024-11-19 23:30:31.827182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40240 ] 00:05:57.703 [2024-11-19 23:30:31.897955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.703 [2024-11-19 23:30:31.948574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.703 [2024-11-19 23:30:31.948710] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
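The 'socket path /var/tmp/spdk.sock in use' error traced here, together with the rpc_initialize failure and spdk_app_stop warning that follow, is exactly what exit_on_failed_rpc_init wants to see: a second target started against the default RPC socket must fail and exit non-zero rather than hang. Two targets can only coexist when the second one is pointed at its own socket with -r, which is how the json_config test further down runs its target (it also caps memory with -s 1024). A hedged sketch, with /var/tmp/second.sock as an illustrative name:

    # first target owns the default /var/tmp/spdk.sock
    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &
    # a second target needs a disjoint core mask and its own RPC socket
    $SPDK_DIR/build/bin/spdk_tgt -m 0x2 -r /var/tmp/second.sock &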
00:05:57.703 [2024-11-19 23:30:31.948733] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:57.703 [2024-11-19 23:30:31.948747] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.703 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:57.703 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.703 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:57.703 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:57.703 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:57.703 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.703 23:30:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:57.703 23:30:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 40118 00:05:57.703 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 40118 ']' 00:05:57.703 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 40118 00:05:57.703 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:57.961 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.961 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 40118 00:05:57.961 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.961 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.961 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 40118' 00:05:57.961 killing process with pid 40118 00:05:57.961 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 40118 00:05:57.961 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 40118 00:05:58.219 00:05:58.219 real 0m1.109s 00:05:58.219 user 0m1.205s 00:05:58.219 sys 0m0.431s 00:05:58.219 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.219 23:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:58.219 ************************************ 00:05:58.219 END TEST exit_on_failed_rpc_init 00:05:58.219 ************************************ 00:05:58.219 23:30:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:58.219 00:05:58.219 real 0m13.472s 00:05:58.219 user 0m12.677s 00:05:58.219 sys 0m1.714s 00:05:58.219 23:30:32 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.219 23:30:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.219 ************************************ 00:05:58.219 END TEST skip_rpc 00:05:58.219 ************************************ 00:05:58.219 23:30:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:58.219 23:30:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.219 23:30:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.219 23:30:32 -- 
common/autotest_common.sh@10 -- # set +x 00:05:58.219 ************************************ 00:05:58.219 START TEST rpc_client 00:05:58.219 ************************************ 00:05:58.219 23:30:32 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:58.478 * Looking for test storage... 00:05:58.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:58.478 23:30:32 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.478 23:30:32 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.478 23:30:32 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.478 23:30:32 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.478 23:30:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:58.478 23:30:32 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.478 23:30:32 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.478 --rc genhtml_branch_coverage=1 00:05:58.478 --rc genhtml_function_coverage=1 00:05:58.478 --rc genhtml_legend=1 00:05:58.478 --rc geninfo_all_blocks=1 00:05:58.478 --rc geninfo_unexecuted_blocks=1 00:05:58.478 00:05:58.478 ' 00:05:58.478 23:30:32 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.478 --rc genhtml_branch_coverage=1 00:05:58.478 --rc genhtml_function_coverage=1 00:05:58.478 --rc genhtml_legend=1 00:05:58.478 --rc geninfo_all_blocks=1 00:05:58.478 --rc geninfo_unexecuted_blocks=1 00:05:58.478 00:05:58.478 ' 00:05:58.478 23:30:32 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.478 --rc genhtml_branch_coverage=1 00:05:58.478 --rc genhtml_function_coverage=1 00:05:58.478 --rc genhtml_legend=1 00:05:58.478 --rc geninfo_all_blocks=1 00:05:58.478 --rc geninfo_unexecuted_blocks=1 00:05:58.478 00:05:58.478 ' 00:05:58.478 23:30:32 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.478 --rc genhtml_branch_coverage=1 00:05:58.478 --rc genhtml_function_coverage=1 00:05:58.478 --rc genhtml_legend=1 00:05:58.478 --rc geninfo_all_blocks=1 00:05:58.478 --rc geninfo_unexecuted_blocks=1 00:05:58.478 00:05:58.478 ' 00:05:58.478 23:30:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:58.478 OK 00:05:58.478 23:30:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:58.478 00:05:58.478 real 0m0.157s 00:05:58.478 user 0m0.102s 00:05:58.478 sys 0m0.064s 00:05:58.478 23:30:32 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.478 23:30:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:58.478 ************************************ 00:05:58.478 END TEST rpc_client 00:05:58.478 ************************************ 00:05:58.478 23:30:32 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
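The rpc_client test above boils down to running the compiled client in test/rpc_client/rpc_client_test, which prints OK on success; scripts/rpc.py, used throughout the rest of this log, speaks the same JSON-RPC protocol over a Unix-domain socket. A quick manual cross-check, assuming a target is listening on the default socket (rpc_get_methods is assumed to be available; it is not exercised in this trace):

    # run the C client test, then poke the same kind of endpoint with the Python client
    $SPDK_DIR/test/rpc_client/rpc_client_test
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods | head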
00:05:58.478 23:30:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.478 23:30:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.478 23:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:58.478 ************************************ 00:05:58.478 START TEST json_config 00:05:58.478 ************************************ 00:05:58.478 23:30:32 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:58.478 23:30:32 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.478 23:30:32 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.478 23:30:32 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.737 23:30:32 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.737 23:30:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.737 23:30:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.737 23:30:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.737 23:30:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.737 23:30:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.737 23:30:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.737 23:30:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.737 23:30:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.737 23:30:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.737 23:30:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.737 23:30:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.737 23:30:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:58.737 23:30:32 json_config -- scripts/common.sh@345 -- # : 1 00:05:58.737 23:30:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.737 23:30:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.737 23:30:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:58.737 23:30:32 json_config -- scripts/common.sh@353 -- # local d=1 00:05:58.737 23:30:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.737 23:30:32 json_config -- scripts/common.sh@355 -- # echo 1 00:05:58.737 23:30:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.737 23:30:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:58.737 23:30:32 json_config -- scripts/common.sh@353 -- # local d=2 00:05:58.737 23:30:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.737 23:30:32 json_config -- scripts/common.sh@355 -- # echo 2 00:05:58.737 23:30:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.737 23:30:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.737 23:30:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.737 23:30:32 json_config -- scripts/common.sh@368 -- # return 0 00:05:58.737 23:30:32 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.737 23:30:32 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.737 --rc genhtml_branch_coverage=1 00:05:58.737 --rc genhtml_function_coverage=1 00:05:58.737 --rc genhtml_legend=1 00:05:58.737 --rc geninfo_all_blocks=1 00:05:58.737 --rc geninfo_unexecuted_blocks=1 00:05:58.737 00:05:58.737 ' 00:05:58.737 23:30:32 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.737 --rc genhtml_branch_coverage=1 00:05:58.737 --rc genhtml_function_coverage=1 00:05:58.737 --rc genhtml_legend=1 00:05:58.737 --rc geninfo_all_blocks=1 00:05:58.737 --rc geninfo_unexecuted_blocks=1 00:05:58.737 00:05:58.737 ' 00:05:58.737 23:30:32 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.737 --rc genhtml_branch_coverage=1 00:05:58.737 --rc genhtml_function_coverage=1 00:05:58.737 --rc genhtml_legend=1 00:05:58.737 --rc geninfo_all_blocks=1 00:05:58.737 --rc geninfo_unexecuted_blocks=1 00:05:58.737 00:05:58.737 ' 00:05:58.737 23:30:32 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.737 --rc genhtml_branch_coverage=1 00:05:58.737 --rc genhtml_function_coverage=1 00:05:58.737 --rc genhtml_legend=1 00:05:58.737 --rc geninfo_all_blocks=1 00:05:58.737 --rc geninfo_unexecuted_blocks=1 00:05:58.737 00:05:58.737 ' 00:05:58.737 23:30:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
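The wall of lt / cmp_versions / decimal calls above is scripts/common.sh comparing the installed lcov version against 2 (lt 1.15 2), which decides whether the branch- and function-coverage flags stay in the exported LCOV_OPTS and LCOV. A small usage sketch of those helpers, assuming they are defined in scripts/common.sh as the trace suggests:

    source $SPDK_DIR/scripts/common.sh
    lt 1.15 2 && echo 'installed lcov is older than 2.x'   # mirrors the comparison traced above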
00:05:58.737 23:30:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.737 23:30:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.737 23:30:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.737 23:30:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.737 23:30:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.737 23:30:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.737 23:30:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.737 23:30:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.737 23:30:32 json_config -- paths/export.sh@5 -- # export PATH 00:05:58.737 23:30:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@51 -- # : 0 00:05:58.737 23:30:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:58.738 23:30:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
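test/nvmf/common.sh, sourced just above, pins the host identity that later nvme connect calls will use: NVME_HOSTNQN comes from nvme gen-hostnqn, NVME_HOSTID is the UUID suffix of that NQN, and both are packed into the NVME_HOST array of --hostnqn/--hostid flags. A hedged sketch of the same idea (the suffix-stripping shown here is illustrative, not necessarily how common.sh derives the ID):

    NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # keep only the UUID part
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    printf '%s\n' "${NVME_HOST[@]}"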
00:05:58.738 23:30:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.738 23:30:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.738 23:30:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.738 23:30:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:58.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:58.738 23:30:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:58.738 23:30:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:58.738 23:30:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:58.738 INFO: JSON configuration test init 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:58.738 23:30:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.738 23:30:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:58.738 23:30:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.738 23:30:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.738 23:30:32 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:58.738 23:30:32 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:58.738 23:30:32 json_config -- json_config/common.sh@10 -- # shift 00:05:58.738 23:30:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:58.738 23:30:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:58.738 23:30:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:58.738 23:30:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.738 23:30:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.738 23:30:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=40498 00:05:58.738 23:30:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:58.738 23:30:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:58.738 Waiting for target to run... 00:05:58.738 23:30:32 json_config -- json_config/common.sh@25 -- # waitforlisten 40498 /var/tmp/spdk_tgt.sock 00:05:58.738 23:30:32 json_config -- common/autotest_common.sh@835 -- # '[' -z 40498 ']' 00:05:58.738 23:30:32 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.738 23:30:32 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.738 23:30:32 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.738 23:30:32 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.738 23:30:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.738 [2024-11-19 23:30:32.932144] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
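The json_config target launched above runs on its own socket (-r /var/tmp/spdk_tgt.sock) with --wait-for-rpc, so it stays parked before subsystem initialization until it is configured over RPC; waitforlisten then polls that socket until it answers. The trace below drives it with gen_nvme.sh piped into load_config. A condensed manual equivalent, assuming load_config reads the JSON from stdin, which is how the back-to-back gen_nvme.sh / load_config lines further down read:

    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # once the socket answers, replay a generated configuration into the parked target
    $SPDK_DIR/scripts/gen_nvme.sh --json-with-subsystems | \
        $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config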
00:05:58.738 [2024-11-19 23:30:32.932235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40498 ] 00:05:59.304 [2024-11-19 23:30:33.308233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.304 [2024-11-19 23:30:33.343046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.869 23:30:33 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.869 23:30:33 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:59.869 23:30:33 json_config -- json_config/common.sh@26 -- # echo '' 00:05:59.869 00:05:59.869 23:30:33 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:59.869 23:30:33 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:59.869 23:30:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.869 23:30:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.869 23:30:33 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:59.869 23:30:33 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:59.869 23:30:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:59.869 23:30:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.869 23:30:33 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:59.869 23:30:33 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:59.869 23:30:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:03.153 23:30:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.153 23:30:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:03.153 23:30:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:03.153 23:30:37 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@54 -- # sort 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:03.153 23:30:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.153 23:30:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:03.153 23:30:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.153 23:30:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:03.153 23:30:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.153 23:30:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.411 MallocForNvmf0 00:06:03.669 23:30:37 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.669 23:30:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.669 MallocForNvmf1 00:06:03.927 23:30:37 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:03.927 23:30:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:03.927 [2024-11-19 23:30:38.232755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.184 23:30:38 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.184 23:30:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.442 23:30:38 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:04.442 23:30:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:04.699 23:30:38 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:04.700 23:30:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:04.958 23:30:39 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:04.958 23:30:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:05.216 [2024-11-19 23:30:39.316395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:05.216 23:30:39 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:05.216 23:30:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.216 23:30:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.216 23:30:39 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:05.216 23:30:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.216 23:30:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.216 23:30:39 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:05.216 23:30:39 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:05.216 23:30:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:05.473 MallocBdevForConfigChangeCheck 00:06:05.473 23:30:39 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:05.473 23:30:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.473 23:30:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.473 23:30:39 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:05.473 23:30:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:06.039 23:30:40 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:06.039 INFO: shutting down applications... 
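The create_nvmf_subsystem_config phase traced above is nothing more than a series of RPCs against the target socket; collected in one place, with arguments copied from the trace and paths shortened to $SPDK_DIR, it looks like this:

    tgt_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }   # sketch of the tgt_rpc helper used in the trace
    tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
    tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The MallocBdevForConfigChangeCheck bdev created at the end of this phase is the marker deleted later to prove that a configuration change is detected.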
00:06:06.039 23:30:40 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:06.039 23:30:40 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:06.039 23:30:40 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:06.039 23:30:40 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:07.938 Calling clear_iscsi_subsystem 00:06:07.938 Calling clear_nvmf_subsystem 00:06:07.938 Calling clear_nbd_subsystem 00:06:07.938 Calling clear_ublk_subsystem 00:06:07.938 Calling clear_vhost_blk_subsystem 00:06:07.938 Calling clear_vhost_scsi_subsystem 00:06:07.938 Calling clear_bdev_subsystem 00:06:07.938 23:30:41 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:07.938 23:30:41 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:07.938 23:30:41 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:07.938 23:30:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.938 23:30:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:07.938 23:30:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:07.938 23:30:42 json_config -- json_config/json_config.sh@352 -- # break 00:06:07.938 23:30:42 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:07.938 23:30:42 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:07.938 23:30:42 json_config -- json_config/common.sh@31 -- # local app=target 00:06:07.938 23:30:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:07.938 23:30:42 json_config -- json_config/common.sh@35 -- # [[ -n 40498 ]] 00:06:07.938 23:30:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 40498 00:06:07.938 23:30:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:07.938 23:30:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.938 23:30:42 json_config -- json_config/common.sh@41 -- # kill -0 40498 00:06:07.938 23:30:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.504 23:30:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.504 23:30:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.504 23:30:42 json_config -- json_config/common.sh@41 -- # kill -0 40498 00:06:08.504 23:30:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.504 23:30:42 json_config -- json_config/common.sh@43 -- # break 00:06:08.504 23:30:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.504 23:30:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.504 SPDK target shutdown done 00:06:08.504 23:30:42 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:08.504 INFO: relaunching applications... 
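Shutting the target down is also scripted: clear_config.py performs the Calling clear_*_subsystem steps above, config_filter.py -method check_empty confirms nothing is left, and json_config_test_shutdown_app sends SIGINT and polls the PID for up to 30 half-second intervals, which is the (( i < 30 )) / kill -0 / sleep 0.5 loop in the trace. A minimal version of that wait loop, with $tgt_pid standing in for the target PID (40498 in this trace):

    kill -SIGINT "$tgt_pid"
    for _ in $(seq 1 30); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # kill -0 only probes whether the process is still alive
        sleep 0.5
    done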
00:06:08.504 23:30:42 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.504 23:30:42 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.504 23:30:42 json_config -- json_config/common.sh@10 -- # shift 00:06:08.504 23:30:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.504 23:30:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.504 23:30:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.504 23:30:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.504 23:30:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.504 23:30:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=41706 00:06:08.504 23:30:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.504 23:30:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.504 Waiting for target to run... 00:06:08.504 23:30:42 json_config -- json_config/common.sh@25 -- # waitforlisten 41706 /var/tmp/spdk_tgt.sock 00:06:08.504 23:30:42 json_config -- common/autotest_common.sh@835 -- # '[' -z 41706 ']' 00:06:08.504 23:30:42 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.504 23:30:42 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.504 23:30:42 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.504 23:30:42 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.504 23:30:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.504 [2024-11-19 23:30:42.730874] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:06:08.504 [2024-11-19 23:30:42.730987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41706 ] 00:06:09.070 [2024-11-19 23:30:43.246246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.070 [2024-11-19 23:30:43.289779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.352 [2024-11-19 23:30:46.349920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.352 [2024-11-19 23:30:46.382427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:12.352 23:30:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.352 23:30:46 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:12.352 23:30:46 json_config -- json_config/common.sh@26 -- # echo '' 00:06:12.352 00:06:12.352 23:30:46 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:12.352 23:30:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:12.352 INFO: Checking if target configuration is the same... 
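The check announced above compares the target's live configuration against the JSON file it was relaunched from: both sides are normalized with config_filter.py -method sort and handed to diff -u, so an empty diff means the configurations match. A trimmed sketch of that comparison (the real json_diff.sh adds temp-file bookkeeping and is fed through /dev/fd; reading the filter from stdin here is an assumption):

# Hedged sketch of the sort-and-diff comparison performed by json_diff.sh below.
RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER=./test/json_config/config_filter.py

$RPC save_config | $FILTER -method sort > /tmp/live_config.json      # current target state
$FILTER -method sort < spdk_tgt_config.json > /tmp/file_config.json  # config used at startup

if diff -u /tmp/live_config.json /tmp/file_config.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi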
00:06:12.352 23:30:46 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.352 23:30:46 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:12.352 23:30:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.352 + '[' 2 -ne 2 ']' 00:06:12.352 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:12.352 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:12.352 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:12.352 +++ basename /dev/fd/62 00:06:12.353 ++ mktemp /tmp/62.XXX 00:06:12.353 + tmp_file_1=/tmp/62.EbU 00:06:12.353 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.353 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.353 + tmp_file_2=/tmp/spdk_tgt_config.json.vwu 00:06:12.353 + ret=0 00:06:12.353 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.611 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.611 + diff -u /tmp/62.EbU /tmp/spdk_tgt_config.json.vwu 00:06:12.611 + echo 'INFO: JSON config files are the same' 00:06:12.611 INFO: JSON config files are the same 00:06:12.611 + rm /tmp/62.EbU /tmp/spdk_tgt_config.json.vwu 00:06:12.611 + exit 0 00:06:12.611 23:30:46 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:12.611 23:30:46 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:12.611 INFO: changing configuration and checking if this can be detected... 00:06:12.611 23:30:46 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.611 23:30:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.869 23:30:47 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.869 23:30:47 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:12.869 23:30:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.869 + '[' 2 -ne 2 ']' 00:06:12.869 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:12.869 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:12.869 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:12.869 +++ basename /dev/fd/62 00:06:12.869 ++ mktemp /tmp/62.XXX 00:06:12.869 + tmp_file_1=/tmp/62.XBU 00:06:12.869 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.869 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.869 + tmp_file_2=/tmp/spdk_tgt_config.json.rg0 00:06:12.869 + ret=0 00:06:12.869 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:13.433 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:13.433 + diff -u /tmp/62.XBU /tmp/spdk_tgt_config.json.rg0 00:06:13.433 + ret=1 00:06:13.433 + echo '=== Start of file: /tmp/62.XBU ===' 00:06:13.433 + cat /tmp/62.XBU 00:06:13.433 + echo '=== End of file: /tmp/62.XBU ===' 00:06:13.433 + echo '' 00:06:13.434 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rg0 ===' 00:06:13.434 + cat /tmp/spdk_tgt_config.json.rg0 00:06:13.434 + echo '=== End of file: /tmp/spdk_tgt_config.json.rg0 ===' 00:06:13.434 + echo '' 00:06:13.434 + rm /tmp/62.XBU /tmp/spdk_tgt_config.json.rg0 00:06:13.434 + exit 1 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:13.434 INFO: configuration change detected. 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@324 -- # [[ -n 41706 ]] 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.434 23:30:47 json_config -- json_config/json_config.sh@330 -- # killprocess 41706 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@954 -- # '[' -z 41706 ']' 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@958 -- # kill -0 41706 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@959 -- # uname 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.434 23:30:47 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 41706 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 41706' 00:06:13.434 killing process with pid 41706 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@973 -- # kill 41706 00:06:13.434 23:30:47 json_config -- common/autotest_common.sh@978 -- # wait 41706 00:06:15.397 23:30:49 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.397 23:30:49 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:15.397 23:30:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.397 23:30:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.397 23:30:49 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:15.397 23:30:49 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:15.397 INFO: Success 00:06:15.397 00:06:15.397 real 0m16.629s 00:06:15.397 user 0m18.819s 00:06:15.397 sys 0m2.081s 00:06:15.397 23:30:49 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.397 23:30:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.397 ************************************ 00:06:15.397 END TEST json_config 00:06:15.397 ************************************ 00:06:15.397 23:30:49 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:15.397 23:30:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.397 23:30:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.397 23:30:49 -- common/autotest_common.sh@10 -- # set +x 00:06:15.397 ************************************ 00:06:15.397 START TEST json_config_extra_key 00:06:15.397 ************************************ 00:06:15.397 23:30:49 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:15.397 23:30:49 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.397 23:30:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.397 23:30:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.397 23:30:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.397 23:30:49 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.397 23:30:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:15.397 23:30:49 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.397 23:30:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.397 --rc genhtml_branch_coverage=1 00:06:15.397 --rc genhtml_function_coverage=1 00:06:15.397 --rc genhtml_legend=1 00:06:15.397 --rc geninfo_all_blocks=1 00:06:15.397 --rc geninfo_unexecuted_blocks=1 00:06:15.397 00:06:15.397 ' 00:06:15.397 23:30:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.398 --rc genhtml_branch_coverage=1 00:06:15.398 --rc genhtml_function_coverage=1 00:06:15.398 --rc genhtml_legend=1 00:06:15.398 --rc geninfo_all_blocks=1 00:06:15.398 --rc geninfo_unexecuted_blocks=1 00:06:15.398 00:06:15.398 ' 00:06:15.398 23:30:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.398 --rc genhtml_branch_coverage=1 00:06:15.398 --rc genhtml_function_coverage=1 00:06:15.398 --rc genhtml_legend=1 00:06:15.398 --rc geninfo_all_blocks=1 00:06:15.398 --rc geninfo_unexecuted_blocks=1 00:06:15.398 00:06:15.398 ' 00:06:15.398 23:30:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.398 --rc genhtml_branch_coverage=1 00:06:15.398 --rc genhtml_function_coverage=1 00:06:15.398 --rc genhtml_legend=1 00:06:15.398 --rc geninfo_all_blocks=1 00:06:15.398 --rc geninfo_unexecuted_blocks=1 00:06:15.398 00:06:15.398 ' 00:06:15.398 23:30:49 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.398 23:30:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.398 23:30:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.398 23:30:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.398 23:30:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.398 23:30:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.398 23:30:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.398 23:30:49 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.398 23:30:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:15.398 23:30:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.398 23:30:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.398 23:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:15.398 23:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:15.398 23:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:15.398 23:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:15.398 23:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:15.398 23:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:15.398 23:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:15.398 23:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:15.398 23:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:15.398 23:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:15.398 23:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:15.398 INFO: launching applications... 
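json_config/common.sh, sourced just above, tracks each test application through a set of bash associative arrays keyed by app name (app_pid, app_socket, app_params and configs_path), which is why the trace shows a series of declare -A lines for the 'target' app. A small self-contained sketch of that bookkeeping pattern (values mirror the extra_key test; paths shortened for illustration):

# Hedged sketch of the per-app associative-array bookkeeping set up above.
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']='./test/json_config/extra_key.json')

app=target
echo "RPC socket for $app: ${app_socket[$app]}"
echo "launch arguments:    ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}"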
00:06:15.398 23:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:15.398 23:30:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:15.398 23:30:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:15.398 23:30:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:15.398 23:30:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:15.398 23:30:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:15.398 23:30:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.398 23:30:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.398 23:30:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=42632 00:06:15.398 23:30:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:15.398 23:30:49 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:15.398 Waiting for target to run... 00:06:15.398 23:30:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 42632 /var/tmp/spdk_tgt.sock 00:06:15.398 23:30:49 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 42632 ']' 00:06:15.398 23:30:49 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:15.398 23:30:49 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.398 23:30:49 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:15.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:15.398 23:30:49 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.398 23:30:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:15.398 [2024-11-19 23:30:49.611948] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:06:15.398 [2024-11-19 23:30:49.612041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42632 ] 00:06:15.964 [2024-11-19 23:30:50.150701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.964 [2024-11-19 23:30:50.196791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.530 23:30:50 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.530 23:30:50 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:16.530 23:30:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:16.530 00:06:16.530 23:30:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:16.530 INFO: shutting down applications... 
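The 'Waiting for target to run...' message above comes from waitforlisten, which blocks until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket before the test proceeds. A rough sketch of such a readiness poll, not the actual waitforlisten implementation from autotest_common.sh, using rpc_get_methods as the probe since the log shows that RPC is available:

# Hedged sketch of a readiness wait on the RPC socket; the real waitforlisten
# also checks that the pid is still alive while it retries.
RPC_SOCK=/var/tmp/spdk_tgt.sock

for (( i = 0; i < 100; i++ )); do
    if ./scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "target is up and listening on $RPC_SOCK"
        break
    fi
    sleep 0.1
done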
00:06:16.530 23:30:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:16.530 23:30:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:16.530 23:30:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:16.530 23:30:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 42632 ]] 00:06:16.530 23:30:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 42632 00:06:16.530 23:30:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:16.530 23:30:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.530 23:30:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 42632 00:06:16.530 23:30:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:17.097 23:30:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:17.097 23:30:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.097 23:30:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 42632 00:06:17.097 23:30:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:17.097 23:30:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:17.097 23:30:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:17.097 23:30:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:17.097 SPDK target shutdown done 00:06:17.097 23:30:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:17.097 Success 00:06:17.097 00:06:17.097 real 0m1.719s 00:06:17.097 user 0m1.496s 00:06:17.097 sys 0m0.678s 00:06:17.097 23:30:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.097 23:30:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:17.097 ************************************ 00:06:17.097 END TEST json_config_extra_key 00:06:17.097 ************************************ 00:06:17.097 23:30:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:17.097 23:30:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.097 23:30:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.097 23:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:17.097 ************************************ 00:06:17.097 START TEST alias_rpc 00:06:17.097 ************************************ 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:17.097 * Looking for test storage... 
00:06:17.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.097 23:30:51 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.097 --rc genhtml_branch_coverage=1 00:06:17.097 --rc genhtml_function_coverage=1 00:06:17.097 --rc genhtml_legend=1 00:06:17.097 --rc geninfo_all_blocks=1 00:06:17.097 --rc geninfo_unexecuted_blocks=1 00:06:17.097 00:06:17.097 ' 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.097 --rc genhtml_branch_coverage=1 00:06:17.097 --rc genhtml_function_coverage=1 00:06:17.097 --rc genhtml_legend=1 00:06:17.097 --rc geninfo_all_blocks=1 00:06:17.097 --rc geninfo_unexecuted_blocks=1 00:06:17.097 00:06:17.097 ' 00:06:17.097 23:30:51 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.097 --rc genhtml_branch_coverage=1 00:06:17.097 --rc genhtml_function_coverage=1 00:06:17.097 --rc genhtml_legend=1 00:06:17.097 --rc geninfo_all_blocks=1 00:06:17.097 --rc geninfo_unexecuted_blocks=1 00:06:17.097 00:06:17.097 ' 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.097 --rc genhtml_branch_coverage=1 00:06:17.097 --rc genhtml_function_coverage=1 00:06:17.097 --rc genhtml_legend=1 00:06:17.097 --rc geninfo_all_blocks=1 00:06:17.097 --rc geninfo_unexecuted_blocks=1 00:06:17.097 00:06:17.097 ' 00:06:17.097 23:30:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:17.097 23:30:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=42946 00:06:17.097 23:30:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:17.097 23:30:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 42946 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 42946 ']' 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.097 23:30:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.098 [2024-11-19 23:30:51.370008] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:06:17.098 [2024-11-19 23:30:51.370149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42946 ] 00:06:17.356 [2024-11-19 23:30:51.441334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.356 [2024-11-19 23:30:51.490964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.614 23:30:51 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.614 23:30:51 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.614 23:30:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:17.872 23:30:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 42946 00:06:17.872 23:30:52 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 42946 ']' 00:06:17.872 23:30:52 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 42946 00:06:17.872 23:30:52 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:17.872 23:30:52 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.872 23:30:52 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 42946 00:06:17.872 23:30:52 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.872 23:30:52 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.872 23:30:52 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 42946' 00:06:17.872 killing process with pid 42946 00:06:17.872 23:30:52 alias_rpc -- common/autotest_common.sh@973 -- # kill 42946 00:06:17.872 23:30:52 alias_rpc -- common/autotest_common.sh@978 -- # wait 42946 00:06:18.438 00:06:18.438 real 0m1.365s 00:06:18.438 user 0m1.498s 00:06:18.438 sys 0m0.470s 00:06:18.438 23:30:52 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.438 23:30:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.438 ************************************ 00:06:18.438 END TEST alias_rpc 00:06:18.438 ************************************ 00:06:18.438 23:30:52 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:18.438 23:30:52 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:18.438 23:30:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.438 23:30:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.438 23:30:52 -- common/autotest_common.sh@10 -- # set +x 00:06:18.438 ************************************ 00:06:18.438 START TEST spdkcli_tcp 00:06:18.438 ************************************ 00:06:18.438 23:30:52 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:18.438 * Looking for test storage... 
00:06:18.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:18.438 23:30:52 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.438 23:30:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.438 23:30:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.438 23:30:52 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.438 23:30:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.439 23:30:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:18.439 23:30:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.439 23:30:52 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.439 --rc genhtml_branch_coverage=1 00:06:18.439 --rc genhtml_function_coverage=1 00:06:18.439 --rc genhtml_legend=1 00:06:18.439 --rc geninfo_all_blocks=1 00:06:18.439 --rc geninfo_unexecuted_blocks=1 00:06:18.439 00:06:18.439 ' 00:06:18.439 23:30:52 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.439 --rc genhtml_branch_coverage=1 00:06:18.439 --rc genhtml_function_coverage=1 00:06:18.439 --rc genhtml_legend=1 00:06:18.439 --rc geninfo_all_blocks=1 00:06:18.439 --rc 
geninfo_unexecuted_blocks=1 00:06:18.439 00:06:18.439 ' 00:06:18.439 23:30:52 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.439 --rc genhtml_branch_coverage=1 00:06:18.439 --rc genhtml_function_coverage=1 00:06:18.439 --rc genhtml_legend=1 00:06:18.439 --rc geninfo_all_blocks=1 00:06:18.439 --rc geninfo_unexecuted_blocks=1 00:06:18.439 00:06:18.439 ' 00:06:18.439 23:30:52 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.439 --rc genhtml_branch_coverage=1 00:06:18.439 --rc genhtml_function_coverage=1 00:06:18.439 --rc genhtml_legend=1 00:06:18.439 --rc geninfo_all_blocks=1 00:06:18.439 --rc geninfo_unexecuted_blocks=1 00:06:18.439 00:06:18.439 ' 00:06:18.439 23:30:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:18.439 23:30:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:18.439 23:30:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:18.439 23:30:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:18.439 23:30:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:18.439 23:30:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:18.697 23:30:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:18.697 23:30:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.697 23:30:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.697 23:30:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=43147 00:06:18.697 23:30:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:18.697 23:30:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 43147 00:06:18.697 23:30:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 43147 ']' 00:06:18.697 23:30:52 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.697 23:30:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.697 23:30:52 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.697 23:30:52 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.697 23:30:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.697 [2024-11-19 23:30:52.805893] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:06:18.697 [2024-11-19 23:30:52.805978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43147 ] 00:06:18.697 [2024-11-19 23:30:52.877479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.697 [2024-11-19 23:30:52.930515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.697 [2024-11-19 23:30:52.930520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.955 23:30:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.955 23:30:53 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:18.955 23:30:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=43271 00:06:18.955 23:30:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:18.955 23:30:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:19.214 [ 00:06:19.214 "bdev_malloc_delete", 00:06:19.214 "bdev_malloc_create", 00:06:19.214 "bdev_null_resize", 00:06:19.214 "bdev_null_delete", 00:06:19.214 "bdev_null_create", 00:06:19.214 "bdev_nvme_cuse_unregister", 00:06:19.214 "bdev_nvme_cuse_register", 00:06:19.214 "bdev_opal_new_user", 00:06:19.214 "bdev_opal_set_lock_state", 00:06:19.214 "bdev_opal_delete", 00:06:19.214 "bdev_opal_get_info", 00:06:19.214 "bdev_opal_create", 00:06:19.214 "bdev_nvme_opal_revert", 00:06:19.214 "bdev_nvme_opal_init", 00:06:19.214 "bdev_nvme_send_cmd", 00:06:19.214 "bdev_nvme_set_keys", 00:06:19.214 "bdev_nvme_get_path_iostat", 00:06:19.214 "bdev_nvme_get_mdns_discovery_info", 00:06:19.214 "bdev_nvme_stop_mdns_discovery", 00:06:19.214 "bdev_nvme_start_mdns_discovery", 00:06:19.214 "bdev_nvme_set_multipath_policy", 00:06:19.214 "bdev_nvme_set_preferred_path", 00:06:19.214 "bdev_nvme_get_io_paths", 00:06:19.214 "bdev_nvme_remove_error_injection", 00:06:19.214 "bdev_nvme_add_error_injection", 00:06:19.214 "bdev_nvme_get_discovery_info", 00:06:19.214 "bdev_nvme_stop_discovery", 00:06:19.214 "bdev_nvme_start_discovery", 00:06:19.214 "bdev_nvme_get_controller_health_info", 00:06:19.214 "bdev_nvme_disable_controller", 00:06:19.214 "bdev_nvme_enable_controller", 00:06:19.214 "bdev_nvme_reset_controller", 00:06:19.214 "bdev_nvme_get_transport_statistics", 00:06:19.214 "bdev_nvme_apply_firmware", 00:06:19.214 "bdev_nvme_detach_controller", 00:06:19.214 "bdev_nvme_get_controllers", 00:06:19.214 "bdev_nvme_attach_controller", 00:06:19.214 "bdev_nvme_set_hotplug", 00:06:19.214 "bdev_nvme_set_options", 00:06:19.214 "bdev_passthru_delete", 00:06:19.214 "bdev_passthru_create", 00:06:19.214 "bdev_lvol_set_parent_bdev", 00:06:19.214 "bdev_lvol_set_parent", 00:06:19.214 "bdev_lvol_check_shallow_copy", 00:06:19.214 "bdev_lvol_start_shallow_copy", 00:06:19.214 "bdev_lvol_grow_lvstore", 00:06:19.214 "bdev_lvol_get_lvols", 00:06:19.214 "bdev_lvol_get_lvstores", 00:06:19.214 "bdev_lvol_delete", 00:06:19.214 "bdev_lvol_set_read_only", 00:06:19.214 "bdev_lvol_resize", 00:06:19.214 "bdev_lvol_decouple_parent", 00:06:19.214 "bdev_lvol_inflate", 00:06:19.214 "bdev_lvol_rename", 00:06:19.214 "bdev_lvol_clone_bdev", 00:06:19.214 "bdev_lvol_clone", 00:06:19.214 "bdev_lvol_snapshot", 00:06:19.214 "bdev_lvol_create", 00:06:19.214 "bdev_lvol_delete_lvstore", 00:06:19.214 "bdev_lvol_rename_lvstore", 
00:06:19.214 "bdev_lvol_create_lvstore", 00:06:19.214 "bdev_raid_set_options", 00:06:19.214 "bdev_raid_remove_base_bdev", 00:06:19.214 "bdev_raid_add_base_bdev", 00:06:19.214 "bdev_raid_delete", 00:06:19.214 "bdev_raid_create", 00:06:19.214 "bdev_raid_get_bdevs", 00:06:19.214 "bdev_error_inject_error", 00:06:19.214 "bdev_error_delete", 00:06:19.214 "bdev_error_create", 00:06:19.214 "bdev_split_delete", 00:06:19.214 "bdev_split_create", 00:06:19.214 "bdev_delay_delete", 00:06:19.214 "bdev_delay_create", 00:06:19.214 "bdev_delay_update_latency", 00:06:19.214 "bdev_zone_block_delete", 00:06:19.214 "bdev_zone_block_create", 00:06:19.214 "blobfs_create", 00:06:19.214 "blobfs_detect", 00:06:19.214 "blobfs_set_cache_size", 00:06:19.214 "bdev_aio_delete", 00:06:19.214 "bdev_aio_rescan", 00:06:19.214 "bdev_aio_create", 00:06:19.214 "bdev_ftl_set_property", 00:06:19.214 "bdev_ftl_get_properties", 00:06:19.214 "bdev_ftl_get_stats", 00:06:19.214 "bdev_ftl_unmap", 00:06:19.214 "bdev_ftl_unload", 00:06:19.214 "bdev_ftl_delete", 00:06:19.214 "bdev_ftl_load", 00:06:19.214 "bdev_ftl_create", 00:06:19.214 "bdev_virtio_attach_controller", 00:06:19.214 "bdev_virtio_scsi_get_devices", 00:06:19.214 "bdev_virtio_detach_controller", 00:06:19.214 "bdev_virtio_blk_set_hotplug", 00:06:19.214 "bdev_iscsi_delete", 00:06:19.214 "bdev_iscsi_create", 00:06:19.214 "bdev_iscsi_set_options", 00:06:19.214 "accel_error_inject_error", 00:06:19.214 "ioat_scan_accel_module", 00:06:19.214 "dsa_scan_accel_module", 00:06:19.214 "iaa_scan_accel_module", 00:06:19.214 "vfu_virtio_create_fs_endpoint", 00:06:19.214 "vfu_virtio_create_scsi_endpoint", 00:06:19.214 "vfu_virtio_scsi_remove_target", 00:06:19.214 "vfu_virtio_scsi_add_target", 00:06:19.214 "vfu_virtio_create_blk_endpoint", 00:06:19.214 "vfu_virtio_delete_endpoint", 00:06:19.214 "keyring_file_remove_key", 00:06:19.214 "keyring_file_add_key", 00:06:19.214 "keyring_linux_set_options", 00:06:19.214 "fsdev_aio_delete", 00:06:19.214 "fsdev_aio_create", 00:06:19.214 "iscsi_get_histogram", 00:06:19.214 "iscsi_enable_histogram", 00:06:19.214 "iscsi_set_options", 00:06:19.214 "iscsi_get_auth_groups", 00:06:19.214 "iscsi_auth_group_remove_secret", 00:06:19.214 "iscsi_auth_group_add_secret", 00:06:19.214 "iscsi_delete_auth_group", 00:06:19.214 "iscsi_create_auth_group", 00:06:19.214 "iscsi_set_discovery_auth", 00:06:19.214 "iscsi_get_options", 00:06:19.214 "iscsi_target_node_request_logout", 00:06:19.214 "iscsi_target_node_set_redirect", 00:06:19.214 "iscsi_target_node_set_auth", 00:06:19.214 "iscsi_target_node_add_lun", 00:06:19.214 "iscsi_get_stats", 00:06:19.214 "iscsi_get_connections", 00:06:19.214 "iscsi_portal_group_set_auth", 00:06:19.214 "iscsi_start_portal_group", 00:06:19.214 "iscsi_delete_portal_group", 00:06:19.214 "iscsi_create_portal_group", 00:06:19.214 "iscsi_get_portal_groups", 00:06:19.214 "iscsi_delete_target_node", 00:06:19.214 "iscsi_target_node_remove_pg_ig_maps", 00:06:19.214 "iscsi_target_node_add_pg_ig_maps", 00:06:19.214 "iscsi_create_target_node", 00:06:19.214 "iscsi_get_target_nodes", 00:06:19.214 "iscsi_delete_initiator_group", 00:06:19.214 "iscsi_initiator_group_remove_initiators", 00:06:19.214 "iscsi_initiator_group_add_initiators", 00:06:19.214 "iscsi_create_initiator_group", 00:06:19.214 "iscsi_get_initiator_groups", 00:06:19.214 "nvmf_set_crdt", 00:06:19.214 "nvmf_set_config", 00:06:19.214 "nvmf_set_max_subsystems", 00:06:19.214 "nvmf_stop_mdns_prr", 00:06:19.214 "nvmf_publish_mdns_prr", 00:06:19.214 "nvmf_subsystem_get_listeners", 00:06:19.214 
"nvmf_subsystem_get_qpairs", 00:06:19.214 "nvmf_subsystem_get_controllers", 00:06:19.214 "nvmf_get_stats", 00:06:19.214 "nvmf_get_transports", 00:06:19.214 "nvmf_create_transport", 00:06:19.214 "nvmf_get_targets", 00:06:19.214 "nvmf_delete_target", 00:06:19.214 "nvmf_create_target", 00:06:19.214 "nvmf_subsystem_allow_any_host", 00:06:19.214 "nvmf_subsystem_set_keys", 00:06:19.214 "nvmf_subsystem_remove_host", 00:06:19.214 "nvmf_subsystem_add_host", 00:06:19.214 "nvmf_ns_remove_host", 00:06:19.214 "nvmf_ns_add_host", 00:06:19.215 "nvmf_subsystem_remove_ns", 00:06:19.215 "nvmf_subsystem_set_ns_ana_group", 00:06:19.215 "nvmf_subsystem_add_ns", 00:06:19.215 "nvmf_subsystem_listener_set_ana_state", 00:06:19.215 "nvmf_discovery_get_referrals", 00:06:19.215 "nvmf_discovery_remove_referral", 00:06:19.215 "nvmf_discovery_add_referral", 00:06:19.215 "nvmf_subsystem_remove_listener", 00:06:19.215 "nvmf_subsystem_add_listener", 00:06:19.215 "nvmf_delete_subsystem", 00:06:19.215 "nvmf_create_subsystem", 00:06:19.215 "nvmf_get_subsystems", 00:06:19.215 "env_dpdk_get_mem_stats", 00:06:19.215 "nbd_get_disks", 00:06:19.215 "nbd_stop_disk", 00:06:19.215 "nbd_start_disk", 00:06:19.215 "ublk_recover_disk", 00:06:19.215 "ublk_get_disks", 00:06:19.215 "ublk_stop_disk", 00:06:19.215 "ublk_start_disk", 00:06:19.215 "ublk_destroy_target", 00:06:19.215 "ublk_create_target", 00:06:19.215 "virtio_blk_create_transport", 00:06:19.215 "virtio_blk_get_transports", 00:06:19.215 "vhost_controller_set_coalescing", 00:06:19.215 "vhost_get_controllers", 00:06:19.215 "vhost_delete_controller", 00:06:19.215 "vhost_create_blk_controller", 00:06:19.215 "vhost_scsi_controller_remove_target", 00:06:19.215 "vhost_scsi_controller_add_target", 00:06:19.215 "vhost_start_scsi_controller", 00:06:19.215 "vhost_create_scsi_controller", 00:06:19.215 "thread_set_cpumask", 00:06:19.215 "scheduler_set_options", 00:06:19.215 "framework_get_governor", 00:06:19.215 "framework_get_scheduler", 00:06:19.215 "framework_set_scheduler", 00:06:19.215 "framework_get_reactors", 00:06:19.215 "thread_get_io_channels", 00:06:19.215 "thread_get_pollers", 00:06:19.215 "thread_get_stats", 00:06:19.215 "framework_monitor_context_switch", 00:06:19.215 "spdk_kill_instance", 00:06:19.215 "log_enable_timestamps", 00:06:19.215 "log_get_flags", 00:06:19.215 "log_clear_flag", 00:06:19.215 "log_set_flag", 00:06:19.215 "log_get_level", 00:06:19.215 "log_set_level", 00:06:19.215 "log_get_print_level", 00:06:19.215 "log_set_print_level", 00:06:19.215 "framework_enable_cpumask_locks", 00:06:19.215 "framework_disable_cpumask_locks", 00:06:19.215 "framework_wait_init", 00:06:19.215 "framework_start_init", 00:06:19.215 "scsi_get_devices", 00:06:19.215 "bdev_get_histogram", 00:06:19.215 "bdev_enable_histogram", 00:06:19.215 "bdev_set_qos_limit", 00:06:19.215 "bdev_set_qd_sampling_period", 00:06:19.215 "bdev_get_bdevs", 00:06:19.215 "bdev_reset_iostat", 00:06:19.215 "bdev_get_iostat", 00:06:19.215 "bdev_examine", 00:06:19.215 "bdev_wait_for_examine", 00:06:19.215 "bdev_set_options", 00:06:19.215 "accel_get_stats", 00:06:19.215 "accel_set_options", 00:06:19.215 "accel_set_driver", 00:06:19.215 "accel_crypto_key_destroy", 00:06:19.215 "accel_crypto_keys_get", 00:06:19.215 "accel_crypto_key_create", 00:06:19.215 "accel_assign_opc", 00:06:19.215 "accel_get_module_info", 00:06:19.215 "accel_get_opc_assignments", 00:06:19.215 "vmd_rescan", 00:06:19.215 "vmd_remove_device", 00:06:19.215 "vmd_enable", 00:06:19.215 "sock_get_default_impl", 00:06:19.215 "sock_set_default_impl", 
00:06:19.215 "sock_impl_set_options", 00:06:19.215 "sock_impl_get_options", 00:06:19.215 "iobuf_get_stats", 00:06:19.215 "iobuf_set_options", 00:06:19.215 "keyring_get_keys", 00:06:19.215 "vfu_tgt_set_base_path", 00:06:19.215 "framework_get_pci_devices", 00:06:19.215 "framework_get_config", 00:06:19.215 "framework_get_subsystems", 00:06:19.215 "fsdev_set_opts", 00:06:19.215 "fsdev_get_opts", 00:06:19.215 "trace_get_info", 00:06:19.215 "trace_get_tpoint_group_mask", 00:06:19.215 "trace_disable_tpoint_group", 00:06:19.215 "trace_enable_tpoint_group", 00:06:19.215 "trace_clear_tpoint_mask", 00:06:19.215 "trace_set_tpoint_mask", 00:06:19.215 "notify_get_notifications", 00:06:19.215 "notify_get_types", 00:06:19.215 "spdk_get_version", 00:06:19.215 "rpc_get_methods" 00:06:19.215 ] 00:06:19.215 23:30:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:19.215 23:30:53 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.215 23:30:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.215 23:30:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:19.215 23:30:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 43147 00:06:19.215 23:30:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 43147 ']' 00:06:19.215 23:30:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 43147 00:06:19.215 23:30:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:19.215 23:30:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.215 23:30:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 43147 00:06:19.474 23:30:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.474 23:30:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.474 23:30:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 43147' 00:06:19.474 killing process with pid 43147 00:06:19.474 23:30:53 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 43147 00:06:19.474 23:30:53 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 43147 00:06:19.732 00:06:19.732 real 0m1.331s 00:06:19.732 user 0m2.380s 00:06:19.732 sys 0m0.492s 00:06:19.732 23:30:53 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.732 23:30:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.732 ************************************ 00:06:19.732 END TEST spdkcli_tcp 00:06:19.732 ************************************ 00:06:19.732 23:30:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.732 23:30:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.732 23:30:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.732 23:30:53 -- common/autotest_common.sh@10 -- # set +x 00:06:19.732 ************************************ 00:06:19.732 START TEST dpdk_mem_utility 00:06:19.732 ************************************ 00:06:19.732 23:30:53 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.732 * Looking for test storage... 
00:06:19.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:19.732 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:19.732 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:19.732 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:19.990 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.990 23:30:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:19.990 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.990 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:19.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.990 --rc genhtml_branch_coverage=1 00:06:19.990 --rc genhtml_function_coverage=1 00:06:19.990 --rc genhtml_legend=1 00:06:19.990 --rc geninfo_all_blocks=1 00:06:19.990 --rc geninfo_unexecuted_blocks=1 00:06:19.990 00:06:19.990 ' 00:06:19.990 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:19.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.990 --rc 
genhtml_branch_coverage=1 00:06:19.990 --rc genhtml_function_coverage=1 00:06:19.990 --rc genhtml_legend=1 00:06:19.990 --rc geninfo_all_blocks=1 00:06:19.990 --rc geninfo_unexecuted_blocks=1 00:06:19.990 00:06:19.990 ' 00:06:19.990 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:19.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.990 --rc genhtml_branch_coverage=1 00:06:19.990 --rc genhtml_function_coverage=1 00:06:19.990 --rc genhtml_legend=1 00:06:19.990 --rc geninfo_all_blocks=1 00:06:19.990 --rc geninfo_unexecuted_blocks=1 00:06:19.990 00:06:19.990 ' 00:06:19.990 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:19.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.990 --rc genhtml_branch_coverage=1 00:06:19.990 --rc genhtml_function_coverage=1 00:06:19.990 --rc genhtml_legend=1 00:06:19.990 --rc geninfo_all_blocks=1 00:06:19.990 --rc geninfo_unexecuted_blocks=1 00:06:19.990 00:06:19.990 ' 00:06:19.990 23:30:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:19.990 23:30:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=43371 00:06:19.990 23:30:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.990 23:30:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 43371 00:06:19.990 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 43371 ']' 00:06:19.990 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.990 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.990 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.990 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.990 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.990 [2024-11-19 23:30:54.157912] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:06:19.990 [2024-11-19 23:30:54.157998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43371 ] 00:06:19.990 [2024-11-19 23:30:54.223735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.990 [2024-11-19 23:30:54.268995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.249 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.249 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:20.249 23:30:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:20.249 23:30:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:20.249 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.249 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.249 { 00:06:20.249 "filename": "/tmp/spdk_mem_dump.txt" 00:06:20.249 } 00:06:20.249 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.249 23:30:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:20.507 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:20.507 1 heaps totaling size 810.000000 MiB 00:06:20.507 size: 810.000000 MiB heap id: 0 00:06:20.507 end heaps---------- 00:06:20.507 9 mempools totaling size 595.772034 MiB 00:06:20.508 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:20.508 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:20.508 size: 92.545471 MiB name: bdev_io_43371 00:06:20.508 size: 50.003479 MiB name: msgpool_43371 00:06:20.508 size: 36.509338 MiB name: fsdev_io_43371 00:06:20.508 size: 21.763794 MiB name: PDU_Pool 00:06:20.508 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:20.508 size: 4.133484 MiB name: evtpool_43371 00:06:20.508 size: 0.026123 MiB name: Session_Pool 00:06:20.508 end mempools------- 00:06:20.508 6 memzones totaling size 4.142822 MiB 00:06:20.508 size: 1.000366 MiB name: RG_ring_0_43371 00:06:20.508 size: 1.000366 MiB name: RG_ring_1_43371 00:06:20.508 size: 1.000366 MiB name: RG_ring_4_43371 00:06:20.508 size: 1.000366 MiB name: RG_ring_5_43371 00:06:20.508 size: 0.125366 MiB name: RG_ring_2_43371 00:06:20.508 size: 0.015991 MiB name: RG_ring_3_43371 00:06:20.508 end memzones------- 00:06:20.508 23:30:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:20.508 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:20.508 list of free elements. 
size: 10.862488 MiB 00:06:20.508 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:20.508 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:20.508 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:20.508 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:20.508 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:20.508 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:20.508 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:20.508 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:20.508 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:20.508 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:20.508 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:20.508 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:20.508 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:20.508 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:20.508 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:20.508 list of standard malloc elements. size: 199.218628 MiB 00:06:20.508 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:20.508 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:20.508 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:20.508 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:20.508 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:20.508 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:20.508 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:20.508 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:20.508 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:20.508 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:20.508 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:20.508 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:20.508 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:20.508 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:20.508 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:20.508 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:20.508 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:20.508 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:20.508 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:20.508 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:20.508 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:20.508 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:20.508 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:20.508 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:20.508 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:20.508 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:20.508 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:20.508 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:20.508 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:20.508 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:20.508 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:20.508 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:20.508 list of memzone associated elements. size: 599.918884 MiB 00:06:20.508 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:20.508 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:20.508 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:20.508 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:20.508 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:20.508 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_43371_0 00:06:20.508 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:20.508 associated memzone info: size: 48.002930 MiB name: MP_msgpool_43371_0 00:06:20.508 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:20.508 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_43371_0 00:06:20.508 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:20.508 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:20.508 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:20.508 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:20.508 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:20.508 associated memzone info: size: 3.000122 MiB name: MP_evtpool_43371_0 00:06:20.508 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:20.508 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_43371 00:06:20.508 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:20.508 associated memzone info: size: 1.007996 MiB name: MP_evtpool_43371 00:06:20.508 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:20.508 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:20.508 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:20.508 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:20.508 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:20.508 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:20.508 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:20.508 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:20.508 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:20.508 associated memzone info: size: 1.000366 MiB name: RG_ring_0_43371 00:06:20.508 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:20.508 associated memzone info: size: 1.000366 MiB name: RG_ring_1_43371 00:06:20.508 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:20.508 associated memzone info: size: 1.000366 MiB name: RG_ring_4_43371 00:06:20.508 element at address: 0x2000318fe940 with 
size: 1.000488 MiB 00:06:20.508 associated memzone info: size: 1.000366 MiB name: RG_ring_5_43371 00:06:20.508 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:20.508 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_43371 00:06:20.508 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:20.508 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_43371 00:06:20.508 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:20.508 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:20.508 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:20.508 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:20.508 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:20.508 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:20.508 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:20.508 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_43371 00:06:20.508 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:20.508 associated memzone info: size: 0.125366 MiB name: RG_ring_2_43371 00:06:20.508 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:20.508 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:20.508 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:20.508 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:20.508 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:20.508 associated memzone info: size: 0.015991 MiB name: RG_ring_3_43371 00:06:20.508 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:20.508 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:20.508 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:20.508 associated memzone info: size: 0.000183 MiB name: MP_msgpool_43371 00:06:20.508 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:20.508 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_43371 00:06:20.508 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:20.508 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_43371 00:06:20.508 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:20.508 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:20.508 23:30:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:20.508 23:30:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 43371 00:06:20.508 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 43371 ']' 00:06:20.508 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 43371 00:06:20.508 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:20.508 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.508 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 43371 00:06:20.508 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.508 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.508 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 43371' 00:06:20.508 killing process with pid 43371 00:06:20.508 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@973 -- # 
kill 43371 00:06:20.508 23:30:54 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 43371 00:06:21.074 00:06:21.074 real 0m1.122s 00:06:21.074 user 0m1.090s 00:06:21.074 sys 0m0.439s 00:06:21.074 23:30:55 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.074 23:30:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:21.074 ************************************ 00:06:21.074 END TEST dpdk_mem_utility 00:06:21.074 ************************************ 00:06:21.074 23:30:55 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:21.074 23:30:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.074 23:30:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.074 23:30:55 -- common/autotest_common.sh@10 -- # set +x 00:06:21.074 ************************************ 00:06:21.074 START TEST event 00:06:21.074 ************************************ 00:06:21.074 23:30:55 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:21.074 * Looking for test storage... 00:06:21.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:21.074 23:30:55 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.074 23:30:55 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.074 23:30:55 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.074 23:30:55 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.074 23:30:55 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.074 23:30:55 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.074 23:30:55 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.074 23:30:55 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.074 23:30:55 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.074 23:30:55 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.074 23:30:55 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.074 23:30:55 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.074 23:30:55 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.074 23:30:55 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.074 23:30:55 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.074 23:30:55 event -- scripts/common.sh@344 -- # case "$op" in 00:06:21.074 23:30:55 event -- scripts/common.sh@345 -- # : 1 00:06:21.074 23:30:55 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.074 23:30:55 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.075 23:30:55 event -- scripts/common.sh@365 -- # decimal 1 00:06:21.075 23:30:55 event -- scripts/common.sh@353 -- # local d=1 00:06:21.075 23:30:55 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.075 23:30:55 event -- scripts/common.sh@355 -- # echo 1 00:06:21.075 23:30:55 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.075 23:30:55 event -- scripts/common.sh@366 -- # decimal 2 00:06:21.075 23:30:55 event -- scripts/common.sh@353 -- # local d=2 00:06:21.075 23:30:55 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.075 23:30:55 event -- scripts/common.sh@355 -- # echo 2 00:06:21.075 23:30:55 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.075 23:30:55 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.075 23:30:55 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.075 23:30:55 event -- scripts/common.sh@368 -- # return 0 00:06:21.075 23:30:55 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.075 23:30:55 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.075 --rc genhtml_branch_coverage=1 00:06:21.075 --rc genhtml_function_coverage=1 00:06:21.075 --rc genhtml_legend=1 00:06:21.075 --rc geninfo_all_blocks=1 00:06:21.075 --rc geninfo_unexecuted_blocks=1 00:06:21.075 00:06:21.075 ' 00:06:21.075 23:30:55 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.075 --rc genhtml_branch_coverage=1 00:06:21.075 --rc genhtml_function_coverage=1 00:06:21.075 --rc genhtml_legend=1 00:06:21.075 --rc geninfo_all_blocks=1 00:06:21.075 --rc geninfo_unexecuted_blocks=1 00:06:21.075 00:06:21.075 ' 00:06:21.075 23:30:55 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.075 --rc genhtml_branch_coverage=1 00:06:21.075 --rc genhtml_function_coverage=1 00:06:21.075 --rc genhtml_legend=1 00:06:21.075 --rc geninfo_all_blocks=1 00:06:21.075 --rc geninfo_unexecuted_blocks=1 00:06:21.075 00:06:21.075 ' 00:06:21.075 23:30:55 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.075 --rc genhtml_branch_coverage=1 00:06:21.075 --rc genhtml_function_coverage=1 00:06:21.075 --rc genhtml_legend=1 00:06:21.075 --rc geninfo_all_blocks=1 00:06:21.075 --rc geninfo_unexecuted_blocks=1 00:06:21.075 00:06:21.075 ' 00:06:21.075 23:30:55 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:21.075 23:30:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:21.075 23:30:55 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:21.075 23:30:55 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:21.075 23:30:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.075 23:30:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.075 ************************************ 00:06:21.075 START TEST event_perf 00:06:21.075 ************************************ 00:06:21.075 23:30:55 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:21.075 Running I/O for 1 seconds...[2024-11-19 23:30:55.300654] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:06:21.075 [2024-11-19 23:30:55.300730] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43659 ] 00:06:21.075 [2024-11-19 23:30:55.374063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.333 [2024-11-19 23:30:55.430008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.333 [2024-11-19 23:30:55.430061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.333 [2024-11-19 23:30:55.430138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.333 [2024-11-19 23:30:55.430142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.265 Running I/O for 1 seconds... 00:06:22.265 lcore 0: 232694 00:06:22.265 lcore 1: 232693 00:06:22.265 lcore 2: 232694 00:06:22.265 lcore 3: 232693 00:06:22.265 done. 00:06:22.265 00:06:22.265 real 0m1.191s 00:06:22.265 user 0m4.108s 00:06:22.265 sys 0m0.079s 00:06:22.265 23:30:56 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.265 23:30:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.265 ************************************ 00:06:22.265 END TEST event_perf 00:06:22.265 ************************************ 00:06:22.265 23:30:56 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:22.265 23:30:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:22.265 23:30:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.265 23:30:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.265 ************************************ 00:06:22.265 START TEST event_reactor 00:06:22.265 ************************************ 00:06:22.265 23:30:56 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:22.265 [2024-11-19 23:30:56.538649] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:06:22.265 [2024-11-19 23:30:56.538716] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43835 ] 00:06:22.523 [2024-11-19 23:30:56.610764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.523 [2024-11-19 23:30:56.660844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.458 test_start 00:06:23.458 oneshot 00:06:23.458 tick 100 00:06:23.458 tick 100 00:06:23.458 tick 250 00:06:23.458 tick 100 00:06:23.458 tick 100 00:06:23.458 tick 100 00:06:23.458 tick 250 00:06:23.458 tick 500 00:06:23.458 tick 100 00:06:23.458 tick 100 00:06:23.458 tick 250 00:06:23.458 tick 100 00:06:23.458 tick 100 00:06:23.458 test_end 00:06:23.458 00:06:23.458 real 0m1.182s 00:06:23.458 user 0m1.106s 00:06:23.458 sys 0m0.072s 00:06:23.458 23:30:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.458 23:30:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:23.458 ************************************ 00:06:23.458 END TEST event_reactor 00:06:23.458 ************************************ 00:06:23.458 23:30:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:23.458 23:30:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:23.458 23:30:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.458 23:30:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.458 ************************************ 00:06:23.458 START TEST event_reactor_perf 00:06:23.458 ************************************ 00:06:23.458 23:30:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:23.458 [2024-11-19 23:30:57.768017] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:06:23.458 [2024-11-19 23:30:57.768114] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43987 ] 00:06:23.716 [2024-11-19 23:30:57.840535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.716 [2024-11-19 23:30:57.890783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.649 test_start 00:06:24.649 test_end 00:06:24.649 Performance: 354619 events per second 00:06:24.649 00:06:24.649 real 0m1.184s 00:06:24.649 user 0m1.109s 00:06:24.649 sys 0m0.070s 00:06:24.649 23:30:58 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.649 23:30:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.649 ************************************ 00:06:24.649 END TEST event_reactor_perf 00:06:24.649 ************************************ 00:06:24.908 23:30:58 event -- event/event.sh@49 -- # uname -s 00:06:24.908 23:30:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:24.908 23:30:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:24.908 23:30:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.908 23:30:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.908 23:30:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.908 ************************************ 00:06:24.908 START TEST event_scheduler 00:06:24.908 ************************************ 00:06:24.908 23:30:58 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:24.908 * Looking for test storage... 
00:06:24.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.908 23:30:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.908 --rc genhtml_branch_coverage=1 00:06:24.908 --rc genhtml_function_coverage=1 00:06:24.908 --rc genhtml_legend=1 00:06:24.908 --rc geninfo_all_blocks=1 00:06:24.908 --rc geninfo_unexecuted_blocks=1 00:06:24.908 00:06:24.908 ' 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.908 --rc genhtml_branch_coverage=1 00:06:24.908 --rc genhtml_function_coverage=1 00:06:24.908 --rc genhtml_legend=1 00:06:24.908 --rc geninfo_all_blocks=1 00:06:24.908 --rc geninfo_unexecuted_blocks=1 00:06:24.908 00:06:24.908 ' 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.908 --rc genhtml_branch_coverage=1 00:06:24.908 --rc genhtml_function_coverage=1 00:06:24.908 --rc genhtml_legend=1 00:06:24.908 --rc geninfo_all_blocks=1 00:06:24.908 --rc geninfo_unexecuted_blocks=1 00:06:24.908 00:06:24.908 ' 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.908 --rc genhtml_branch_coverage=1 00:06:24.908 --rc genhtml_function_coverage=1 00:06:24.908 --rc genhtml_legend=1 00:06:24.908 --rc geninfo_all_blocks=1 00:06:24.908 --rc geninfo_unexecuted_blocks=1 00:06:24.908 00:06:24.908 ' 00:06:24.908 23:30:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:24.908 23:30:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=44175 00:06:24.908 23:30:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:24.908 23:30:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.908 23:30:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 44175 
00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 44175 ']' 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.908 23:30:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.909 [2024-11-19 23:30:59.178118] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:06:24.909 [2024-11-19 23:30:59.178205] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44175 ] 00:06:25.166 [2024-11-19 23:30:59.244956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.166 [2024-11-19 23:30:59.297035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.166 [2024-11-19 23:30:59.297098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.166 [2024-11-19 23:30:59.297171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.166 [2024-11-19 23:30:59.297175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.166 23:30:59 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.166 23:30:59 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:25.166 23:30:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:25.166 23:30:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.166 23:30:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.166 [2024-11-19 23:30:59.430185] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:25.166 [2024-11-19 23:30:59.430212] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:25.166 [2024-11-19 23:30:59.430231] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:25.166 [2024-11-19 23:30:59.430243] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:25.166 [2024-11-19 23:30:59.430260] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:25.166 23:30:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.166 23:30:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:25.166 23:30:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.166 23:30:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 [2024-11-19 23:30:59.523706] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
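[editor's note, not part of the captured output] The scheduler setup traced above is driven entirely through standard SPDK RPCs (the app is started with --wait-for-rpc, the dynamic scheduler is selected, then framework init is finished). A minimal sketch of reproducing the same sequence by hand, assuming the scripts/rpc.py client referenced elsewhere in this log and the default /var/tmp/spdk.sock socket; method names are taken from the rpc_get_methods listing earlier in this log:

    # switch the waiting app to the dynamic scheduler, then finish framework init
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # confirm which scheduler is now active
    ./scripts/rpc.py framework_get_scheduler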
00:06:25.424 23:30:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.424 23:30:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:25.424 23:30:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.424 23:30:59 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.424 23:30:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 ************************************ 00:06:25.424 START TEST scheduler_create_thread 00:06:25.424 ************************************ 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 2 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 3 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 4 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 5 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 6 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 7 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 8 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 9 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 10 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:25.424 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:25.425 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.425 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.425 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.425 23:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:25.425 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.425 23:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.990 23:31:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.990 23:31:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:25.991 23:31:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:25.991 23:31:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.991 23:31:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.364 23:31:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.364 00:06:27.364 real 0m1.751s 00:06:27.364 user 0m0.009s 00:06:27.364 sys 0m0.007s 00:06:27.364 23:31:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.364 23:31:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.364 ************************************ 00:06:27.364 END TEST scheduler_create_thread 00:06:27.364 ************************************ 00:06:27.364 23:31:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:27.364 23:31:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 44175 00:06:27.364 23:31:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 44175 ']' 00:06:27.364 23:31:01 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 44175 00:06:27.364 23:31:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:27.364 23:31:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.364 23:31:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 44175 00:06:27.364 23:31:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:27.364 23:31:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:27.364 23:31:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 44175' 00:06:27.364 killing process with pid 44175 00:06:27.364 23:31:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 44175 00:06:27.364 23:31:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 44175 00:06:27.622 [2024-11-19 23:31:01.779406] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
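[editor's note, not part of the captured output] The scheduler_create_thread subtest above exercises the test-only scheduler_plugin through the same RPC client. A condensed sketch of the traced sequence, assuming a scheduler app configured as above and the plugin being on rpc.py's plugin path; the thread ids (11, 12) are simply the values returned by the create calls in this run:

    # create active/idle threads pinned to individual cores (as traced above)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # throttle thread 11 to 50% active
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    # create a short-lived thread and delete it again
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12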
00:06:27.881 00:06:27.881 real 0m2.982s 00:06:27.881 user 0m4.061s 00:06:27.881 sys 0m0.369s 00:06:27.881 23:31:01 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.881 23:31:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.881 ************************************ 00:06:27.881 END TEST event_scheduler 00:06:27.881 ************************************ 00:06:27.881 23:31:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:27.881 23:31:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:27.881 23:31:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.881 23:31:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.881 23:31:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.881 ************************************ 00:06:27.881 START TEST app_repeat 00:06:27.881 ************************************ 00:06:27.881 23:31:02 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=44521 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 44521' 00:06:27.881 Process app_repeat pid: 44521 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:27.881 spdk_app_start Round 0 00:06:27.881 23:31:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 44521 /var/tmp/spdk-nbd.sock 00:06:27.881 23:31:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 44521 ']' 00:06:27.881 23:31:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.881 23:31:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.881 23:31:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.881 23:31:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.881 23:31:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.881 [2024-11-19 23:31:02.052886] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:06:27.881 [2024-11-19 23:31:02.052963] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44521 ] 00:06:27.881 [2024-11-19 23:31:02.128830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.881 [2024-11-19 23:31:02.183097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.881 [2024-11-19 23:31:02.183117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.139 23:31:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.139 23:31:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:28.139 23:31:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.397 Malloc0 00:06:28.397 23:31:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.656 Malloc1 00:06:28.656 23:31:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.656 23:31:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.222 /dev/nbd0 00:06:29.222 23:31:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.222 23:31:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.222 1+0 records in 00:06:29.222 1+0 records out 00:06:29.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238094 s, 17.2 MB/s 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.222 23:31:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.222 23:31:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.222 23:31:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.222 23:31:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.479 /dev/nbd1 00:06:29.479 23:31:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.479 23:31:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.479 1+0 records in 00:06:29.479 1+0 records out 00:06:29.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191654 s, 21.4 MB/s 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.479 23:31:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.480 23:31:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.480 23:31:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.480 
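Each round begins by creating two 64 MiB malloc bdevs with a 4 KiB block size over the RPC socket and exporting them as /dev/nbd0 and /dev/nbd1; the waitfornbd helper then polls /proc/partitions and reads one direct 4 KiB block to confirm the NBD device is actually serving I/O. A sketch of that step for a single device, with $SPDK_DIR and the temp-file path as stand-ins (the retry delay inside the poll loop is an assumption, since the trace breaks out on the first pass):

  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  bdev=$($rpc bdev_malloc_create 64 4096)        # prints the new bdev name, e.g. Malloc0
  $rpc nbd_start_disk "$bdev" /dev/nbd0
  # wait (up to 20 tries) for the kernel to publish the partition entry
  for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
  done
  # one direct read proves the device answers I/O, not just that the node exists
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct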
23:31:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.480 23:31:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.480 23:31:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.738 { 00:06:29.738 "nbd_device": "/dev/nbd0", 00:06:29.738 "bdev_name": "Malloc0" 00:06:29.738 }, 00:06:29.738 { 00:06:29.738 "nbd_device": "/dev/nbd1", 00:06:29.738 "bdev_name": "Malloc1" 00:06:29.738 } 00:06:29.738 ]' 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.738 { 00:06:29.738 "nbd_device": "/dev/nbd0", 00:06:29.738 "bdev_name": "Malloc0" 00:06:29.738 }, 00:06:29.738 { 00:06:29.738 "nbd_device": "/dev/nbd1", 00:06:29.738 "bdev_name": "Malloc1" 00:06:29.738 } 00:06:29.738 ]' 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.738 /dev/nbd1' 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.738 /dev/nbd1' 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.738 256+0 records in 00:06:29.738 256+0 records out 00:06:29.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476502 s, 220 MB/s 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.738 23:31:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.738 256+0 records in 00:06:29.738 256+0 records out 00:06:29.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228043 s, 46.0 MB/s 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.739 256+0 records in 00:06:29.739 256+0 records out 00:06:29.739 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244168 s, 42.9 MB/s 00:06:29.739 23:31:03 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.739 23:31:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.996 23:31:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.996 23:31:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.996 23:31:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.996 23:31:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.996 23:31:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.996 23:31:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.996 23:31:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.996 23:31:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.996 23:31:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.996 23:31:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.561 23:31:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.819 23:31:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.819 23:31:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.819 23:31:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.819 23:31:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.819 23:31:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.819 23:31:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.819 23:31:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.819 23:31:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.819 23:31:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.819 23:31:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.077 23:31:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:31.077 [2024-11-19 23:31:05.377523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.334 [2024-11-19 23:31:05.425276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.335 [2024-11-19 23:31:05.425276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.335 [2024-11-19 23:31:05.483165] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:31.335 [2024-11-19 23:31:05.483223] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.614 23:31:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.614 23:31:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:34.614 spdk_app_start Round 1 00:06:34.614 23:31:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 44521 /var/tmp/spdk-nbd.sock 00:06:34.614 23:31:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 44521 ']' 00:06:34.614 23:31:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.614 23:31:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.614 23:31:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
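The data check performed in each round above is plain dd plus cmp: 1 MiB of random data is written through both NBD devices with O_DIRECT, then read back and compared byte-for-byte against the source file. A sketch with shortened paths, sizes as in the trace (256 blocks of 4 KiB):

  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      # push the pattern through the NBD device, bypassing the page cache
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      # a non-zero exit here would mean the data read back differs
      cmp -b -n 1M "$tmp" "$nbd"
  done
  rm "$tmp"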
00:06:34.614 23:31:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.614 23:31:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.614 23:31:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.614 23:31:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:34.614 23:31:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.614 Malloc0 00:06:34.614 23:31:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.872 Malloc1 00:06:34.872 23:31:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.872 23:31:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.130 /dev/nbd0 00:06:35.130 23:31:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.130 23:31:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:35.130 1+0 records in 00:06:35.130 1+0 records out 00:06:35.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178891 s, 22.9 MB/s 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:35.130 23:31:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:35.130 23:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.130 23:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.130 23:31:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:35.388 /dev/nbd1 00:06:35.388 23:31:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.388 23:31:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.388 1+0 records in 00:06:35.388 1+0 records out 00:06:35.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192627 s, 21.3 MB/s 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:35.388 23:31:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:35.388 23:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.388 23:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.388 23:31:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.388 23:31:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.388 23:31:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.645 23:31:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:35.645 { 00:06:35.645 "nbd_device": "/dev/nbd0", 00:06:35.645 "bdev_name": "Malloc0" 00:06:35.645 }, 00:06:35.645 { 00:06:35.645 "nbd_device": "/dev/nbd1", 00:06:35.645 "bdev_name": "Malloc1" 00:06:35.645 } 00:06:35.645 ]' 00:06:35.645 23:31:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:35.645 { 00:06:35.645 "nbd_device": "/dev/nbd0", 00:06:35.645 "bdev_name": "Malloc0" 00:06:35.645 }, 00:06:35.645 { 00:06:35.645 "nbd_device": "/dev/nbd1", 00:06:35.645 "bdev_name": "Malloc1" 00:06:35.645 } 00:06:35.645 ]' 00:06:35.645 23:31:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.904 /dev/nbd1' 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.904 /dev/nbd1' 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.904 256+0 records in 00:06:35.904 256+0 records out 00:06:35.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048237 s, 217 MB/s 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.904 23:31:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.904 256+0 records in 00:06:35.904 256+0 records out 00:06:35.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229885 s, 45.6 MB/s 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.904 256+0 records in 00:06:35.904 256+0 records out 00:06:35.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248352 s, 42.2 MB/s 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.904 23:31:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.162 23:31:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.162 23:31:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.162 23:31:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.162 23:31:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.162 23:31:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.162 23:31:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.162 23:31:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.162 23:31:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.162 23:31:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.162 23:31:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.420 23:31:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.420 23:31:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.420 23:31:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.420 23:31:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.420 23:31:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.420 23:31:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.420 23:31:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.420 23:31:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.420 23:31:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.420 23:31:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.420 23:31:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.679 23:31:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.679 23:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.679 23:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.679 23:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.679 23:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.679 23:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.679 23:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:36.679 23:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.679 23:31:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.679 23:31:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.679 23:31:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.679 23:31:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.679 23:31:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.245 23:31:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.245 [2024-11-19 23:31:11.482002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.245 [2024-11-19 23:31:11.529854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.245 [2024-11-19 23:31:11.529855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.503 [2024-11-19 23:31:11.591645] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.503 [2024-11-19 23:31:11.591712] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:40.027 23:31:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.027 23:31:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:40.027 spdk_app_start Round 2 00:06:40.027 23:31:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 44521 /var/tmp/spdk-nbd.sock 00:06:40.027 23:31:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 44521 ']' 00:06:40.027 23:31:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.027 23:31:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.027 23:31:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
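After setup and again after teardown, the helper counts the exported devices by parsing the JSON that nbd_get_disks returns: jq pulls out every .nbd_device and grep -c counts how many look like /dev/nbd, expecting 2 while the disks are attached and 0 once they are stopped. A sketch of that check under the same assumptions as above:

  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  disks_json=$($rpc nbd_get_disks)                        # JSON array of {nbd_device, bdev_name}
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)       # grep -c exits non-zero on zero matches
  [ "$count" -eq 2 ] || echo "expected 2 NBD devices, got $count" >&2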
00:06:40.027 23:31:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.027 23:31:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.285 23:31:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.285 23:31:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:40.285 23:31:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.543 Malloc0 00:06:40.543 23:31:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.801 Malloc1 00:06:41.092 23:31:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.092 23:31:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.373 /dev/nbd0 00:06:41.373 23:31:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.373 23:31:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:41.373 1+0 records in 00:06:41.373 1+0 records out 00:06:41.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160915 s, 25.5 MB/s 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.373 23:31:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.373 23:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.373 23:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.373 23:31:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.631 /dev/nbd1 00:06:41.631 23:31:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.631 23:31:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.631 1+0 records in 00:06:41.631 1+0 records out 00:06:41.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180929 s, 22.6 MB/s 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.631 23:31:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.631 23:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.631 23:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.631 23:31:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.631 23:31:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.631 23:31:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.889 23:31:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:41.889 { 00:06:41.889 "nbd_device": "/dev/nbd0", 00:06:41.889 "bdev_name": "Malloc0" 00:06:41.889 }, 00:06:41.889 { 00:06:41.889 "nbd_device": "/dev/nbd1", 00:06:41.889 "bdev_name": "Malloc1" 00:06:41.889 } 00:06:41.889 ]' 00:06:41.889 23:31:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.889 { 00:06:41.889 "nbd_device": "/dev/nbd0", 00:06:41.889 "bdev_name": "Malloc0" 00:06:41.890 }, 00:06:41.890 { 00:06:41.890 "nbd_device": "/dev/nbd1", 00:06:41.890 "bdev_name": "Malloc1" 00:06:41.890 } 00:06:41.890 ]' 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.890 /dev/nbd1' 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.890 /dev/nbd1' 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.890 256+0 records in 00:06:41.890 256+0 records out 00:06:41.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515287 s, 203 MB/s 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.890 256+0 records in 00:06:41.890 256+0 records out 00:06:41.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197988 s, 53.0 MB/s 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.890 23:31:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.148 256+0 records in 00:06:42.148 256+0 records out 00:06:42.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238742 s, 43.9 MB/s 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.148 23:31:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.405 23:31:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.405 23:31:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.405 23:31:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.405 23:31:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.406 23:31:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.406 23:31:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.406 23:31:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.406 23:31:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.406 23:31:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.406 23:31:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.663 23:31:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.663 23:31:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.663 23:31:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.663 23:31:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.663 23:31:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.663 23:31:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.663 23:31:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.663 23:31:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.663 23:31:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.663 23:31:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.663 23:31:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.922 23:31:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.922 23:31:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.922 23:31:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.922 23:31:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.922 23:31:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.922 23:31:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.922 23:31:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.922 23:31:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.922 23:31:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.922 23:31:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.922 23:31:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.922 23:31:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.922 23:31:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.179 23:31:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.438 [2024-11-19 23:31:17.609158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.438 [2024-11-19 23:31:17.656845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.438 [2024-11-19 23:31:17.656850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.438 [2024-11-19 23:31:17.718908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.438 [2024-11-19 23:31:17.718978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.717 23:31:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 44521 /var/tmp/spdk-nbd.sock 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 44521 ']' 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
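Each round is torn down from the inside: rather than signalling the pid, the harness asks the running app to exit over the same RPC socket and pauses before the next round reuses it; only the final round (visible just below) falls back to killprocess, which signals the pid directly and waits on it. The per-round teardown reduces to two lines:

  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  sleep 3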
00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:46.717 23:31:20 event.app_repeat -- event/event.sh@39 -- # killprocess 44521 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 44521 ']' 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 44521 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 44521 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 44521' 00:06:46.717 killing process with pid 44521 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 44521 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 44521 00:06:46.717 spdk_app_start is called in Round 0. 00:06:46.717 Shutdown signal received, stop current app iteration 00:06:46.717 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 reinitialization... 00:06:46.717 spdk_app_start is called in Round 1. 00:06:46.717 Shutdown signal received, stop current app iteration 00:06:46.717 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 reinitialization... 00:06:46.717 spdk_app_start is called in Round 2. 00:06:46.717 Shutdown signal received, stop current app iteration 00:06:46.717 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 reinitialization... 00:06:46.717 spdk_app_start is called in Round 3. 00:06:46.717 Shutdown signal received, stop current app iteration 00:06:46.717 23:31:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:46.717 23:31:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:46.717 00:06:46.717 real 0m18.868s 00:06:46.717 user 0m41.875s 00:06:46.717 sys 0m3.231s 00:06:46.717 23:31:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.718 23:31:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.718 ************************************ 00:06:46.718 END TEST app_repeat 00:06:46.718 ************************************ 00:06:46.718 23:31:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:46.718 23:31:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:46.718 23:31:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.718 23:31:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.718 23:31:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.718 ************************************ 00:06:46.718 START TEST cpu_locks 00:06:46.718 ************************************ 00:06:46.718 23:31:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:46.718 * Looking for test storage... 
00:06:46.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:46.718 23:31:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.718 23:31:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.718 23:31:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.976 23:31:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.976 23:31:21 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:46.976 23:31:21 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.976 23:31:21 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.976 --rc genhtml_branch_coverage=1 00:06:46.976 --rc genhtml_function_coverage=1 00:06:46.976 --rc genhtml_legend=1 00:06:46.976 --rc geninfo_all_blocks=1 00:06:46.976 --rc geninfo_unexecuted_blocks=1 00:06:46.976 00:06:46.976 ' 00:06:46.976 23:31:21 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.976 --rc genhtml_branch_coverage=1 00:06:46.976 --rc 
genhtml_function_coverage=1 00:06:46.976 --rc genhtml_legend=1 00:06:46.976 --rc geninfo_all_blocks=1 00:06:46.976 --rc geninfo_unexecuted_blocks=1 00:06:46.976 00:06:46.976 ' 00:06:46.977 23:31:21 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.977 --rc genhtml_branch_coverage=1 00:06:46.977 --rc genhtml_function_coverage=1 00:06:46.977 --rc genhtml_legend=1 00:06:46.977 --rc geninfo_all_blocks=1 00:06:46.977 --rc geninfo_unexecuted_blocks=1 00:06:46.977 00:06:46.977 ' 00:06:46.977 23:31:21 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.977 --rc genhtml_branch_coverage=1 00:06:46.977 --rc genhtml_function_coverage=1 00:06:46.977 --rc genhtml_legend=1 00:06:46.977 --rc geninfo_all_blocks=1 00:06:46.977 --rc geninfo_unexecuted_blocks=1 00:06:46.977 00:06:46.977 ' 00:06:46.977 23:31:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:46.977 23:31:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:46.977 23:31:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:46.977 23:31:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:46.977 23:31:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.977 23:31:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.977 23:31:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.977 ************************************ 00:06:46.977 START TEST default_locks 00:06:46.977 ************************************ 00:06:46.977 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:46.977 23:31:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=46999 00:06:46.977 23:31:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.977 23:31:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 46999 00:06:46.977 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 46999 ']' 00:06:46.977 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.977 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.977 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.977 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.977 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.977 [2024-11-19 23:31:21.176046] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
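default_locks starts a bare spdk_tgt on core mask 0x1 and then confirms the target is holding its CPU core lock file by grepping lslocks output for spdk_cpu_lock; the stray "lslocks: write error" a little further down in the trace is just lslocks hitting a broken pipe once grep -q exits on the first match, not a test failure. A sketch of the check, assuming $SPDK_DIR and the sourced helpers as before:

  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"                  # default RPC socket /var/tmp/spdk.sock
  # the target should hold a spdk_cpu_lock entry for core 0
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "core lock held by $spdk_tgt_pid"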
00:06:46.977 [2024-11-19 23:31:21.176148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46999 ] 00:06:46.977 [2024-11-19 23:31:21.249443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.235 [2024-11-19 23:31:21.300109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.493 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.493 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:47.493 23:31:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 46999 00:06:47.493 23:31:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 46999 00:06:47.493 23:31:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.750 lslocks: write error 00:06:47.751 23:31:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 46999 00:06:47.751 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 46999 ']' 00:06:47.751 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 46999 00:06:47.751 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:47.751 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.751 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 46999 00:06:47.751 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.751 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.751 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 46999' 00:06:47.751 killing process with pid 46999 00:06:47.751 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 46999 00:06:47.751 23:31:21 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 46999 00:06:48.008 23:31:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 46999 00:06:48.008 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:48.008 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 46999 00:06:48.008 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:48.008 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.008 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:48.008 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.008 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 46999 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 46999 ']' 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.009 
23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (46999) - No such process 00:06:48.009 ERROR: process (pid: 46999) is no longer running 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.009 00:06:48.009 real 0m1.134s 00:06:48.009 user 0m1.085s 00:06:48.009 sys 0m0.524s 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.009 23:31:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.009 ************************************ 00:06:48.009 END TEST default_locks 00:06:48.009 ************************************ 00:06:48.009 23:31:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:48.009 23:31:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.009 23:31:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.009 23:31:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.009 ************************************ 00:06:48.009 START TEST default_locks_via_rpc 00:06:48.009 ************************************ 00:06:48.009 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:48.009 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=47191 00:06:48.009 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.009 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 47191 00:06:48.009 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 47191 ']' 00:06:48.009 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.009 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.009 23:31:22 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.009 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.009 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.266 [2024-11-19 23:31:22.359221] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:06:48.266 [2024-11-19 23:31:22.359303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47191 ] 00:06:48.266 [2024-11-19 23:31:22.430975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.266 [2024-11-19 23:31:22.483915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 47191 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 47191 00:06:48.524 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.782 23:31:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 47191 00:06:48.782 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 47191 ']' 00:06:48.782 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 47191 00:06:48.782 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:48.782 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
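default_locks_via_rpc drives the same core-0 lock, but through the RPC socket instead of the command line: the log above shows framework_disable_cpumask_locks followed by framework_enable_cpumask_locks before the lslocks check. A rough equivalent, assuming SPDK's scripts/rpc.py exposes those two methods as subcommands against the default /var/tmp/spdk.sock; the method names come from this log, and the pgrep lookup is only illustrative:

    # Target starts with the usual core-0 mask and its lock held.
    # (The test waits for the RPC socket to come up before issuing calls.)
    ./build/bin/spdk_tgt -m 0x1 &

    # Release the core locks at runtime, then take them again.
    ./scripts/rpc.py framework_disable_cpumask_locks
    ./scripts/rpc.py framework_enable_cpumask_locks

    # With locks re-enabled, lslocks sees spdk_cpu_lock held once more.
    lslocks -p "$(pgrep -f spdk_tgt)" | grep spdk_cpu_lock
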
00:06:48.782 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 47191 00:06:48.782 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.782 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.782 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47191' 00:06:48.782 killing process with pid 47191 00:06:48.782 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 47191 00:06:48.782 23:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 47191 00:06:49.346 00:06:49.346 real 0m1.088s 00:06:49.346 user 0m1.084s 00:06:49.346 sys 0m0.515s 00:06:49.346 23:31:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.346 23:31:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.346 ************************************ 00:06:49.346 END TEST default_locks_via_rpc 00:06:49.346 ************************************ 00:06:49.346 23:31:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:49.346 23:31:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.346 23:31:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.346 23:31:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.346 ************************************ 00:06:49.346 START TEST non_locking_app_on_locked_coremask 00:06:49.346 ************************************ 00:06:49.346 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:49.346 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=47345 00:06:49.346 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.346 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 47345 /var/tmp/spdk.sock 00:06:49.346 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 47345 ']' 00:06:49.346 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.346 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.346 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.346 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.346 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.346 [2024-11-19 23:31:23.496573] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:06:49.346 [2024-11-19 23:31:23.496656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47345 ] 00:06:49.346 [2024-11-19 23:31:23.562611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.346 [2024-11-19 23:31:23.611295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.603 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.603 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:49.603 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=47452 00:06:49.603 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:49.603 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 47452 /var/tmp/spdk2.sock 00:06:49.603 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 47452 ']' 00:06:49.603 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.603 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.603 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.603 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.603 23:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.860 [2024-11-19 23:31:23.925508] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:06:49.860 [2024-11-19 23:31:23.925584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47452 ] 00:06:49.860 [2024-11-19 23:31:24.036132] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
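non_locking_app_on_locked_coremask pairs a normal target with one that opts out of lock enforcement: the second instance above is started with --disable-cpumask-locks and its own RPC socket, which is why it can come up on the already-claimed core 0 and prints the "CPU core locks deactivated" notice. Condensed to its two launches (paths shortened relative to an SPDK checkout, waits omitted):

    # First target claims core 0 on the default RPC socket.
    ./build/bin/spdk_tgt -m 0x1 &
    first=$!

    # Second target shares core 0 but never tries to claim the lock,
    # and listens on a separate RPC socket so the two can coexist.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

    # Only the first instance shows up holding spdk_cpu_lock.
    lslocks -p "$first" | grep spdk_cpu_lock
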
00:06:49.860 [2024-11-19 23:31:24.036166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.860 [2024-11-19 23:31:24.133529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.791 23:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.791 23:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:50.791 23:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 47345 00:06:50.791 23:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 47345 00:06:50.791 23:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.049 lslocks: write error 00:06:51.049 23:31:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 47345 00:06:51.049 23:31:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 47345 ']' 00:06:51.049 23:31:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 47345 00:06:51.049 23:31:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:51.049 23:31:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.049 23:31:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 47345 00:06:51.049 23:31:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.049 23:31:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.049 23:31:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47345' 00:06:51.049 killing process with pid 47345 00:06:51.049 23:31:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 47345 00:06:51.049 23:31:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 47345 00:06:51.983 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 47452 00:06:51.983 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 47452 ']' 00:06:51.983 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 47452 00:06:51.983 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:51.983 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.983 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 47452 00:06:51.983 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.983 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.983 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47452' 00:06:51.983 killing process with pid 
47452 00:06:51.983 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 47452 00:06:51.983 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 47452 00:06:52.242 00:06:52.242 real 0m3.045s 00:06:52.242 user 0m3.277s 00:06:52.242 sys 0m0.991s 00:06:52.242 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.242 23:31:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.242 ************************************ 00:06:52.242 END TEST non_locking_app_on_locked_coremask 00:06:52.242 ************************************ 00:06:52.242 23:31:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:52.242 23:31:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.242 23:31:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.242 23:31:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.242 ************************************ 00:06:52.242 START TEST locking_app_on_unlocked_coremask 00:06:52.242 ************************************ 00:06:52.242 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:52.242 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=47752 00:06:52.242 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:52.242 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 47752 /var/tmp/spdk.sock 00:06:52.242 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 47752 ']' 00:06:52.242 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.242 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.242 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.242 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.242 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.500 [2024-11-19 23:31:26.592158] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:06:52.500 [2024-11-19 23:31:26.592252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47752 ] 00:06:52.500 [2024-11-19 23:31:26.664198] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.500 [2024-11-19 23:31:26.664240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.500 [2024-11-19 23:31:26.711801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.759 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.759 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:52.759 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=47885 00:06:52.759 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.759 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 47885 /var/tmp/spdk2.sock 00:06:52.759 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 47885 ']' 00:06:52.759 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.759 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.759 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.759 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.759 23:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.759 [2024-11-19 23:31:27.037835] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
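locking_app_on_unlocked_coremask is the mirror image of the previous test: here the first target is the one launched with --disable-cpumask-locks, so core 0 is left unclaimed and the second, normally started instance on /var/tmp/spdk2.sock acquires the lock instead (the lslocks check that follows in the log is run against the second pid). Sketched under the same assumptions as above:

    # First target runs on core 0 without claiming its lock.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &

    # Second target claims core 0 normally, on its own RPC socket.
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    second=$!

    # The core-0 lock belongs to the second instance.
    lslocks -p "$second" | grep spdk_cpu_lock
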
00:06:52.759 [2024-11-19 23:31:27.037939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47885 ] 00:06:53.017 [2024-11-19 23:31:27.152169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.017 [2024-11-19 23:31:27.253132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.950 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.950 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:53.950 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 47885 00:06:53.950 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 47885 00:06:53.950 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.208 lslocks: write error 00:06:54.208 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 47752 00:06:54.208 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 47752 ']' 00:06:54.208 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 47752 00:06:54.208 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:54.208 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.208 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 47752 00:06:54.208 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.208 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.208 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47752' 00:06:54.208 killing process with pid 47752 00:06:54.208 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 47752 00:06:54.208 23:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 47752 00:06:55.142 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 47885 00:06:55.142 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 47885 ']' 00:06:55.142 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 47885 00:06:55.142 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:55.142 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.142 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 47885 00:06:55.142 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.142 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.142 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47885' 00:06:55.142 killing process with pid 47885 00:06:55.142 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 47885 00:06:55.142 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 47885 00:06:55.400 00:06:55.400 real 0m3.134s 00:06:55.400 user 0m3.329s 00:06:55.400 sys 0m1.025s 00:06:55.400 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.400 23:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.400 ************************************ 00:06:55.400 END TEST locking_app_on_unlocked_coremask 00:06:55.400 ************************************ 00:06:55.400 23:31:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:55.400 23:31:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.400 23:31:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.400 23:31:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.659 ************************************ 00:06:55.659 START TEST locking_app_on_locked_coremask 00:06:55.659 ************************************ 00:06:55.659 23:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:55.659 23:31:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=48186 00:06:55.659 23:31:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.659 23:31:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 48186 /var/tmp/spdk.sock 00:06:55.659 23:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 48186 ']' 00:06:55.659 23:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.659 23:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.659 23:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.659 23:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.659 23:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.659 [2024-11-19 23:31:29.780856] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:06:55.659 [2024-11-19 23:31:29.780960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48186 ] 00:06:55.659 [2024-11-19 23:31:29.853288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.659 [2024-11-19 23:31:29.900554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=48199 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 48199 /var/tmp/spdk2.sock 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 48199 /var/tmp/spdk2.sock 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 48199 /var/tmp/spdk2.sock 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 48199 ']' 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.917 23:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.175 [2024-11-19 23:31:30.234674] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:06:56.175 [2024-11-19 23:31:30.234762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48199 ] 00:06:56.175 [2024-11-19 23:31:30.354662] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 48186 has claimed it. 00:06:56.175 [2024-11-19 23:31:30.354729] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:56.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (48199) - No such process 00:06:56.742 ERROR: process (pid: 48199) is no longer running 00:06:56.742 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.742 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:56.742 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:56.742 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.742 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:56.742 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.742 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 48186 00:06:56.742 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 48186 00:06:56.742 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.000 lslocks: write error 00:06:57.000 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 48186 00:06:57.000 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 48186 ']' 00:06:57.000 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 48186 00:06:57.000 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:57.000 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.000 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 48186 00:06:57.258 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.258 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.258 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48186' 00:06:57.258 killing process with pid 48186 00:06:57.258 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 48186 00:06:57.258 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 48186 00:06:57.517 00:06:57.517 real 0m1.991s 00:06:57.517 user 0m2.246s 00:06:57.517 sys 0m0.621s 00:06:57.517 23:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.517 23:31:31 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.517 ************************************ 00:06:57.517 END TEST locking_app_on_locked_coremask 00:06:57.517 ************************************ 00:06:57.517 23:31:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:57.517 23:31:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.517 23:31:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.517 23:31:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.517 ************************************ 00:06:57.517 START TEST locking_overlapped_coremask 00:06:57.517 ************************************ 00:06:57.517 23:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:57.517 23:31:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=48484 00:06:57.517 23:31:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:57.517 23:31:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 48484 /var/tmp/spdk.sock 00:06:57.517 23:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 48484 ']' 00:06:57.517 23:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.517 23:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.517 23:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.517 23:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.517 23:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.517 [2024-11-19 23:31:31.824107] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:06:57.517 [2024-11-19 23:31:31.824198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48484 ] 00:06:57.775 [2024-11-19 23:31:31.890463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.775 [2024-11-19 23:31:31.942554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.775 [2024-11-19 23:31:31.942620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.775 [2024-11-19 23:31:31.942624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=48495 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 48495 /var/tmp/spdk2.sock 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 48495 /var/tmp/spdk2.sock 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 48495 /var/tmp/spdk2.sock 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 48495 ']' 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.034 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.034 [2024-11-19 23:31:32.269611] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:06:58.034 [2024-11-19 23:31:32.269708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48495 ] 00:06:58.292 [2024-11-19 23:31:32.374780] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 48484 has claimed it. 00:06:58.292 [2024-11-19 23:31:32.374846] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:58.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (48495) - No such process 00:06:58.857 ERROR: process (pid: 48495) is no longer running 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 48484 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 48484 ']' 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 48484 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.857 23:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 48484 00:06:58.857 23:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.857 23:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.857 23:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48484' 00:06:58.857 killing process with pid 48484 00:06:58.857 23:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 48484 00:06:58.857 23:31:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 48484 00:06:59.424 00:06:59.424 real 0m1.660s 00:06:59.424 user 0m4.679s 00:06:59.424 sys 0m0.465s 00:06:59.424 23:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.424 23:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.424 ************************************ 00:06:59.424 END TEST locking_overlapped_coremask 00:06:59.424 ************************************ 00:06:59.424 23:31:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:59.424 23:31:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.424 23:31:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.424 23:31:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.424 ************************************ 00:06:59.424 START TEST locking_overlapped_coremask_via_rpc 00:06:59.424 ************************************ 00:06:59.424 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:59.424 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=48658 00:06:59.424 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:59.424 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 48658 /var/tmp/spdk.sock 00:06:59.424 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 48658 ']' 00:06:59.424 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.424 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.424 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.424 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.424 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.424 [2024-11-19 23:31:33.536581] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:06:59.424 [2024-11-19 23:31:33.536684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48658 ] 00:06:59.424 [2024-11-19 23:31:33.610863] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:59.424 [2024-11-19 23:31:33.610904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.424 [2024-11-19 23:31:33.666778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.424 [2024-11-19 23:31:33.666850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.424 [2024-11-19 23:31:33.666853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.682 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.682 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:59.682 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=48788 00:06:59.682 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:59.682 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 48788 /var/tmp/spdk2.sock 00:06:59.682 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 48788 ']' 00:06:59.682 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.682 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.682 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.682 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.682 23:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.682 [2024-11-19 23:31:33.985115] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:06:59.682 [2024-11-19 23:31:33.985214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48788 ] 00:06:59.940 [2024-11-19 23:31:34.090166] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:59.940 [2024-11-19 23:31:34.090205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.940 [2024-11-19 23:31:34.186492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.940 [2024-11-19 23:31:34.186557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:59.940 [2024-11-19 23:31:34.186559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:00.873 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.874 [2024-11-19 23:31:34.989178] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 48658 has claimed it. 
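The claim error above is plain core-mask arithmetic: the first target in locking_overlapped_coremask_via_rpc runs with -m 0x7 (cores 0-2, locks initially disabled) and the second with -m 0x1c (cores 2-4), so once the first re-enables its locks over RPC, the second's framework_enable_cpumask_locks call has to fail on the shared core 2; that is the core named both in the error above and in the JSON-RPC response reproduced just below. The overlap can be checked with one line of shell:

    # 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4); the AND is the clash.
    printf 'overlapping core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2
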
00:07:00.874 request: 00:07:00.874 { 00:07:00.874 "method": "framework_enable_cpumask_locks", 00:07:00.874 "req_id": 1 00:07:00.874 } 00:07:00.874 Got JSON-RPC error response 00:07:00.874 response: 00:07:00.874 { 00:07:00.874 "code": -32603, 00:07:00.874 "message": "Failed to claim CPU core: 2" 00:07:00.874 } 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 48658 /var/tmp/spdk.sock 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 48658 ']' 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.874 23:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.132 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.132 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:01.132 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 48788 /var/tmp/spdk2.sock 00:07:01.132 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 48788 ']' 00:07:01.132 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.132 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.132 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:01.132 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.132 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.390 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.390 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:01.390 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:01.390 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:01.390 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:01.390 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:01.390 00:07:01.390 real 0m2.052s 00:07:01.390 user 0m1.140s 00:07:01.390 sys 0m0.183s 00:07:01.390 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.390 23:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.390 ************************************ 00:07:01.390 END TEST locking_overlapped_coremask_via_rpc 00:07:01.390 ************************************ 00:07:01.390 23:31:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:01.390 23:31:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 48658 ]] 00:07:01.390 23:31:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 48658 00:07:01.390 23:31:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 48658 ']' 00:07:01.390 23:31:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 48658 00:07:01.390 23:31:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:01.390 23:31:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.390 23:31:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 48658 00:07:01.390 23:31:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.390 23:31:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.390 23:31:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48658' 00:07:01.390 killing process with pid 48658 00:07:01.390 23:31:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 48658 00:07:01.390 23:31:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 48658 00:07:01.956 23:31:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 48788 ]] 00:07:01.956 23:31:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 48788 00:07:01.956 23:31:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 48788 ']' 00:07:01.956 23:31:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 48788 00:07:01.956 23:31:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:01.956 23:31:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.956 
23:31:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 48788 00:07:01.956 23:31:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:01.956 23:31:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:01.956 23:31:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48788' 00:07:01.956 killing process with pid 48788 00:07:01.956 23:31:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 48788 00:07:01.956 23:31:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 48788 00:07:02.215 23:31:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:02.215 23:31:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:02.215 23:31:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 48658 ]] 00:07:02.215 23:31:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 48658 00:07:02.215 23:31:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 48658 ']' 00:07:02.215 23:31:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 48658 00:07:02.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (48658) - No such process 00:07:02.215 23:31:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 48658 is not found' 00:07:02.215 Process with pid 48658 is not found 00:07:02.215 23:31:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 48788 ]] 00:07:02.215 23:31:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 48788 00:07:02.215 23:31:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 48788 ']' 00:07:02.215 23:31:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 48788 00:07:02.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (48788) - No such process 00:07:02.215 23:31:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 48788 is not found' 00:07:02.215 Process with pid 48788 is not found 00:07:02.215 23:31:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:02.215 00:07:02.215 real 0m15.482s 00:07:02.215 user 0m28.348s 00:07:02.215 sys 0m5.303s 00:07:02.215 23:31:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.215 23:31:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.215 ************************************ 00:07:02.215 END TEST cpu_locks 00:07:02.215 ************************************ 00:07:02.215 00:07:02.215 real 0m41.317s 00:07:02.215 user 1m20.816s 00:07:02.215 sys 0m9.368s 00:07:02.215 23:31:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.215 23:31:36 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.215 ************************************ 00:07:02.215 END TEST event 00:07:02.215 ************************************ 00:07:02.215 23:31:36 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:02.215 23:31:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.215 23:31:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.215 23:31:36 -- common/autotest_common.sh@10 -- # set +x 00:07:02.215 ************************************ 00:07:02.215 START TEST thread 00:07:02.215 ************************************ 00:07:02.215 23:31:36 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:02.474 * Looking for test storage... 00:07:02.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:02.474 23:31:36 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.474 23:31:36 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.474 23:31:36 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.474 23:31:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.474 23:31:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.474 23:31:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.474 23:31:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.474 23:31:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.474 23:31:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.474 23:31:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.474 23:31:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.474 23:31:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.474 23:31:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.474 23:31:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.474 23:31:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.474 23:31:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:02.474 23:31:36 thread -- scripts/common.sh@345 -- # : 1 00:07:02.474 23:31:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.474 23:31:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.474 23:31:36 thread -- scripts/common.sh@365 -- # decimal 1 00:07:02.474 23:31:36 thread -- scripts/common.sh@353 -- # local d=1 00:07:02.474 23:31:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.474 23:31:36 thread -- scripts/common.sh@355 -- # echo 1 00:07:02.474 23:31:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.474 23:31:36 thread -- scripts/common.sh@366 -- # decimal 2 00:07:02.474 23:31:36 thread -- scripts/common.sh@353 -- # local d=2 00:07:02.474 23:31:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.474 23:31:36 thread -- scripts/common.sh@355 -- # echo 2 00:07:02.474 23:31:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.474 23:31:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.474 23:31:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.474 23:31:36 thread -- scripts/common.sh@368 -- # return 0 00:07:02.474 23:31:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.474 23:31:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.474 --rc genhtml_branch_coverage=1 00:07:02.474 --rc genhtml_function_coverage=1 00:07:02.474 --rc genhtml_legend=1 00:07:02.474 --rc geninfo_all_blocks=1 00:07:02.474 --rc geninfo_unexecuted_blocks=1 00:07:02.474 00:07:02.474 ' 00:07:02.474 23:31:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.474 --rc genhtml_branch_coverage=1 00:07:02.474 --rc genhtml_function_coverage=1 00:07:02.474 --rc genhtml_legend=1 00:07:02.474 --rc geninfo_all_blocks=1 00:07:02.474 --rc geninfo_unexecuted_blocks=1 00:07:02.474 00:07:02.474 ' 00:07:02.474 23:31:36 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.474 --rc genhtml_branch_coverage=1 00:07:02.474 --rc genhtml_function_coverage=1 00:07:02.474 --rc genhtml_legend=1 00:07:02.474 --rc geninfo_all_blocks=1 00:07:02.474 --rc geninfo_unexecuted_blocks=1 00:07:02.474 00:07:02.474 ' 00:07:02.474 23:31:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.474 --rc genhtml_branch_coverage=1 00:07:02.474 --rc genhtml_function_coverage=1 00:07:02.474 --rc genhtml_legend=1 00:07:02.474 --rc geninfo_all_blocks=1 00:07:02.474 --rc geninfo_unexecuted_blocks=1 00:07:02.474 00:07:02.474 ' 00:07:02.474 23:31:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:02.474 23:31:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:02.474 23:31:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.474 23:31:36 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.474 ************************************ 00:07:02.474 START TEST thread_poller_perf 00:07:02.474 ************************************ 00:07:02.474 23:31:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:02.474 [2024-11-19 23:31:36.665233] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:07:02.474 [2024-11-19 23:31:36.665293] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49165 ] 00:07:02.474 [2024-11-19 23:31:36.739617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.733 [2024-11-19 23:31:36.789566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.733 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:03.776 [2024-11-19T22:31:38.088Z] ====================================== 00:07:03.776 [2024-11-19T22:31:38.088Z] busy:2712620645 (cyc) 00:07:03.776 [2024-11-19T22:31:38.088Z] total_run_count: 293000 00:07:03.776 [2024-11-19T22:31:38.088Z] tsc_hz: 2700000000 (cyc) 00:07:03.776 [2024-11-19T22:31:38.088Z] ====================================== 00:07:03.776 [2024-11-19T22:31:38.088Z] poller_cost: 9258 (cyc), 3428 (nsec) 00:07:03.776 00:07:03.776 real 0m1.193s 00:07:03.776 user 0m1.120s 00:07:03.776 sys 0m0.067s 00:07:03.776 23:31:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.776 23:31:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.776 ************************************ 00:07:03.776 END TEST thread_poller_perf 00:07:03.776 ************************************ 00:07:03.776 23:31:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:03.776 23:31:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:03.776 23:31:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.776 23:31:37 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.776 ************************************ 00:07:03.776 START TEST thread_poller_perf 00:07:03.776 ************************************ 00:07:03.776 23:31:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:03.776 [2024-11-19 23:31:37.905945] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:07:03.776 [2024-11-19 23:31:37.906013] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49322 ] 00:07:03.776 [2024-11-19 23:31:37.977766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.776 [2024-11-19 23:31:38.030409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.776 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:05.149 [2024-11-19T22:31:39.461Z] ====================================== 00:07:05.149 [2024-11-19T22:31:39.461Z] busy:2702476318 (cyc) 00:07:05.149 [2024-11-19T22:31:39.461Z] total_run_count: 3745000 00:07:05.149 [2024-11-19T22:31:39.461Z] tsc_hz: 2700000000 (cyc) 00:07:05.149 [2024-11-19T22:31:39.461Z] ====================================== 00:07:05.149 [2024-11-19T22:31:39.461Z] poller_cost: 721 (cyc), 267 (nsec) 00:07:05.149 00:07:05.149 real 0m1.187s 00:07:05.149 user 0m1.112s 00:07:05.149 sys 0m0.069s 00:07:05.149 23:31:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.149 23:31:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:05.149 ************************************ 00:07:05.149 END TEST thread_poller_perf 00:07:05.149 ************************************ 00:07:05.149 23:31:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:05.149 00:07:05.149 real 0m2.613s 00:07:05.149 user 0m2.359s 00:07:05.149 sys 0m0.256s 00:07:05.149 23:31:39 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.149 23:31:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.149 ************************************ 00:07:05.149 END TEST thread 00:07:05.149 ************************************ 00:07:05.149 23:31:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:05.149 23:31:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:05.149 23:31:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.149 23:31:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.149 23:31:39 -- common/autotest_common.sh@10 -- # set +x 00:07:05.149 ************************************ 00:07:05.149 START TEST app_cmdline 00:07:05.149 ************************************ 00:07:05.149 23:31:39 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:05.149 * Looking for test storage... 
00:07:05.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.149 23:31:39 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.149 23:31:39 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.149 23:31:39 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.149 23:31:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:05.149 23:31:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.150 23:31:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:05.150 23:31:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.150 23:31:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.150 --rc genhtml_branch_coverage=1 00:07:05.150 --rc genhtml_function_coverage=1 00:07:05.150 --rc genhtml_legend=1 00:07:05.150 --rc geninfo_all_blocks=1 00:07:05.150 --rc geninfo_unexecuted_blocks=1 00:07:05.150 00:07:05.150 ' 00:07:05.150 23:31:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.150 --rc genhtml_branch_coverage=1 00:07:05.150 --rc genhtml_function_coverage=1 00:07:05.150 --rc genhtml_legend=1 00:07:05.150 --rc geninfo_all_blocks=1 00:07:05.150 --rc geninfo_unexecuted_blocks=1 
00:07:05.150 00:07:05.150 ' 00:07:05.150 23:31:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.150 --rc genhtml_branch_coverage=1 00:07:05.150 --rc genhtml_function_coverage=1 00:07:05.150 --rc genhtml_legend=1 00:07:05.150 --rc geninfo_all_blocks=1 00:07:05.150 --rc geninfo_unexecuted_blocks=1 00:07:05.150 00:07:05.150 ' 00:07:05.150 23:31:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.150 --rc genhtml_branch_coverage=1 00:07:05.150 --rc genhtml_function_coverage=1 00:07:05.150 --rc genhtml_legend=1 00:07:05.150 --rc geninfo_all_blocks=1 00:07:05.150 --rc geninfo_unexecuted_blocks=1 00:07:05.150 00:07:05.150 ' 00:07:05.150 23:31:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:05.150 23:31:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=49648 00:07:05.150 23:31:39 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:05.150 23:31:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 49648 00:07:05.150 23:31:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 49648 ']' 00:07:05.150 23:31:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.150 23:31:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.150 23:31:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.150 23:31:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.150 23:31:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.150 [2024-11-19 23:31:39.358378] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:07:05.150 [2024-11-19 23:31:39.358476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49648 ] 00:07:05.150 [2024-11-19 23:31:39.430914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.407 [2024-11-19 23:31:39.482717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.665 23:31:39 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.665 23:31:39 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:05.665 23:31:39 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:05.923 { 00:07:05.923 "version": "SPDK v25.01-pre git sha1 f22e807f1", 00:07:05.923 "fields": { 00:07:05.923 "major": 25, 00:07:05.923 "minor": 1, 00:07:05.923 "patch": 0, 00:07:05.923 "suffix": "-pre", 00:07:05.923 "commit": "f22e807f1" 00:07:05.923 } 00:07:05.923 } 00:07:05.923 23:31:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:05.923 23:31:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:05.923 23:31:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:05.923 23:31:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:05.923 23:31:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.923 23:31:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.923 23:31:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.923 23:31:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:05.923 23:31:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:05.923 23:31:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:05.923 23:31:40 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:06.181 request: 00:07:06.181 { 00:07:06.181 "method": "env_dpdk_get_mem_stats", 00:07:06.181 "req_id": 1 00:07:06.181 } 00:07:06.181 Got JSON-RPC error response 00:07:06.181 response: 00:07:06.181 { 00:07:06.181 "code": -32601, 00:07:06.181 "message": "Method not found" 00:07:06.181 } 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:06.181 23:31:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 49648 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 49648 ']' 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 49648 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 49648 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 49648' 00:07:06.181 killing process with pid 49648 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 49648 00:07:06.181 23:31:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 49648 00:07:06.746 00:07:06.746 real 0m1.612s 00:07:06.746 user 0m2.013s 00:07:06.746 sys 0m0.482s 00:07:06.746 23:31:40 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.746 23:31:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.746 ************************************ 00:07:06.746 END TEST app_cmdline 00:07:06.746 ************************************ 00:07:06.746 23:31:40 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:06.746 23:31:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.746 23:31:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.746 23:31:40 -- common/autotest_common.sh@10 -- # set +x 00:07:06.746 ************************************ 00:07:06.746 START TEST version 00:07:06.746 ************************************ 00:07:06.746 23:31:40 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:06.746 * Looking for test storage... 
00:07:06.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:06.746 23:31:40 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.746 23:31:40 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.746 23:31:40 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.746 23:31:40 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.746 23:31:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.746 23:31:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.746 23:31:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.746 23:31:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.746 23:31:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.746 23:31:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.746 23:31:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.746 23:31:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.746 23:31:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.746 23:31:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.746 23:31:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.746 23:31:40 version -- scripts/common.sh@344 -- # case "$op" in 00:07:06.746 23:31:40 version -- scripts/common.sh@345 -- # : 1 00:07:06.746 23:31:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.746 23:31:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.746 23:31:40 version -- scripts/common.sh@365 -- # decimal 1 00:07:06.746 23:31:40 version -- scripts/common.sh@353 -- # local d=1 00:07:06.746 23:31:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.746 23:31:40 version -- scripts/common.sh@355 -- # echo 1 00:07:06.747 23:31:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.747 23:31:40 version -- scripts/common.sh@366 -- # decimal 2 00:07:06.747 23:31:40 version -- scripts/common.sh@353 -- # local d=2 00:07:06.747 23:31:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.747 23:31:40 version -- scripts/common.sh@355 -- # echo 2 00:07:06.747 23:31:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.747 23:31:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.747 23:31:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.747 23:31:40 version -- scripts/common.sh@368 -- # return 0 00:07:06.747 23:31:40 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.747 23:31:40 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.747 --rc genhtml_branch_coverage=1 00:07:06.747 --rc genhtml_function_coverage=1 00:07:06.747 --rc genhtml_legend=1 00:07:06.747 --rc geninfo_all_blocks=1 00:07:06.747 --rc geninfo_unexecuted_blocks=1 00:07:06.747 00:07:06.747 ' 00:07:06.747 23:31:40 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.747 --rc genhtml_branch_coverage=1 00:07:06.747 --rc genhtml_function_coverage=1 00:07:06.747 --rc genhtml_legend=1 00:07:06.747 --rc geninfo_all_blocks=1 00:07:06.747 --rc geninfo_unexecuted_blocks=1 00:07:06.747 00:07:06.747 ' 00:07:06.747 23:31:40 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.747 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.747 --rc genhtml_branch_coverage=1 00:07:06.747 --rc genhtml_function_coverage=1 00:07:06.747 --rc genhtml_legend=1 00:07:06.747 --rc geninfo_all_blocks=1 00:07:06.747 --rc geninfo_unexecuted_blocks=1 00:07:06.747 00:07:06.747 ' 00:07:06.747 23:31:40 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.747 --rc genhtml_branch_coverage=1 00:07:06.747 --rc genhtml_function_coverage=1 00:07:06.747 --rc genhtml_legend=1 00:07:06.747 --rc geninfo_all_blocks=1 00:07:06.747 --rc geninfo_unexecuted_blocks=1 00:07:06.747 00:07:06.747 ' 00:07:06.747 23:31:40 version -- app/version.sh@17 -- # get_header_version major 00:07:06.747 23:31:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:06.747 23:31:40 version -- app/version.sh@14 -- # cut -f2 00:07:06.747 23:31:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.747 23:31:40 version -- app/version.sh@17 -- # major=25 00:07:06.747 23:31:40 version -- app/version.sh@18 -- # get_header_version minor 00:07:06.747 23:31:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:06.747 23:31:40 version -- app/version.sh@14 -- # cut -f2 00:07:06.747 23:31:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.747 23:31:40 version -- app/version.sh@18 -- # minor=1 00:07:06.747 23:31:40 version -- app/version.sh@19 -- # get_header_version patch 00:07:06.747 23:31:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:06.747 23:31:40 version -- app/version.sh@14 -- # cut -f2 00:07:06.747 23:31:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.747 23:31:40 version -- app/version.sh@19 -- # patch=0 00:07:06.747 23:31:40 version -- app/version.sh@20 -- # get_header_version suffix 00:07:06.747 23:31:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:06.747 23:31:40 version -- app/version.sh@14 -- # cut -f2 00:07:06.747 23:31:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.747 23:31:40 version -- app/version.sh@20 -- # suffix=-pre 00:07:06.747 23:31:40 version -- app/version.sh@22 -- # version=25.1 00:07:06.747 23:31:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:06.747 23:31:40 version -- app/version.sh@28 -- # version=25.1rc0 00:07:06.747 23:31:40 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:06.747 23:31:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:06.747 23:31:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:06.747 23:31:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:06.747 00:07:06.747 real 0m0.199s 00:07:06.747 user 0m0.122s 00:07:06.747 sys 0m0.102s 00:07:06.747 23:31:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.747 
23:31:41 version -- common/autotest_common.sh@10 -- # set +x 00:07:06.747 ************************************ 00:07:06.747 END TEST version 00:07:06.747 ************************************ 00:07:06.747 23:31:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:06.747 23:31:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:06.747 23:31:41 -- spdk/autotest.sh@194 -- # uname -s 00:07:06.747 23:31:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:06.747 23:31:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:06.747 23:31:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:06.747 23:31:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:06.747 23:31:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:06.747 23:31:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:06.747 23:31:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.747 23:31:41 -- common/autotest_common.sh@10 -- # set +x 00:07:07.005 23:31:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:07.005 23:31:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:07.005 23:31:41 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:07.005 23:31:41 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:07.005 23:31:41 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:07.005 23:31:41 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:07.005 23:31:41 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.005 23:31:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.005 23:31:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.005 23:31:41 -- common/autotest_common.sh@10 -- # set +x 00:07:07.005 ************************************ 00:07:07.005 START TEST nvmf_tcp 00:07:07.005 ************************************ 00:07:07.005 23:31:41 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.005 * Looking for test storage... 
00:07:07.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:07.005 23:31:41 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.005 23:31:41 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.005 23:31:41 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.005 23:31:41 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.005 23:31:41 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:07.005 23:31:41 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.005 23:31:41 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.005 --rc genhtml_branch_coverage=1 00:07:07.005 --rc genhtml_function_coverage=1 00:07:07.005 --rc genhtml_legend=1 00:07:07.005 --rc geninfo_all_blocks=1 00:07:07.005 --rc geninfo_unexecuted_blocks=1 00:07:07.005 00:07:07.005 ' 00:07:07.005 23:31:41 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.005 --rc genhtml_branch_coverage=1 00:07:07.005 --rc genhtml_function_coverage=1 00:07:07.005 --rc genhtml_legend=1 00:07:07.005 --rc geninfo_all_blocks=1 00:07:07.005 --rc geninfo_unexecuted_blocks=1 00:07:07.005 00:07:07.005 ' 00:07:07.005 23:31:41 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:07.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.005 --rc genhtml_branch_coverage=1 00:07:07.005 --rc genhtml_function_coverage=1 00:07:07.005 --rc genhtml_legend=1 00:07:07.005 --rc geninfo_all_blocks=1 00:07:07.005 --rc geninfo_unexecuted_blocks=1 00:07:07.005 00:07:07.005 ' 00:07:07.005 23:31:41 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.005 --rc genhtml_branch_coverage=1 00:07:07.005 --rc genhtml_function_coverage=1 00:07:07.006 --rc genhtml_legend=1 00:07:07.006 --rc geninfo_all_blocks=1 00:07:07.006 --rc geninfo_unexecuted_blocks=1 00:07:07.006 00:07:07.006 ' 00:07:07.006 23:31:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:07.006 23:31:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:07.006 23:31:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:07.006 23:31:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.006 23:31:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.006 23:31:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.006 ************************************ 00:07:07.006 START TEST nvmf_target_core 00:07:07.006 ************************************ 00:07:07.006 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:07.006 * Looking for test storage... 00:07:07.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:07.006 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.006 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.006 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.265 --rc genhtml_branch_coverage=1 00:07:07.265 --rc genhtml_function_coverage=1 00:07:07.265 --rc genhtml_legend=1 00:07:07.265 --rc geninfo_all_blocks=1 00:07:07.265 --rc geninfo_unexecuted_blocks=1 00:07:07.265 00:07:07.265 ' 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.265 --rc genhtml_branch_coverage=1 00:07:07.265 --rc genhtml_function_coverage=1 00:07:07.265 --rc genhtml_legend=1 00:07:07.265 --rc geninfo_all_blocks=1 00:07:07.265 --rc geninfo_unexecuted_blocks=1 00:07:07.265 00:07:07.265 ' 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.265 --rc genhtml_branch_coverage=1 00:07:07.265 --rc genhtml_function_coverage=1 00:07:07.265 --rc genhtml_legend=1 00:07:07.265 --rc geninfo_all_blocks=1 00:07:07.265 --rc geninfo_unexecuted_blocks=1 00:07:07.265 00:07:07.265 ' 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.265 --rc genhtml_branch_coverage=1 00:07:07.265 --rc genhtml_function_coverage=1 00:07:07.265 --rc genhtml_legend=1 00:07:07.265 --rc geninfo_all_blocks=1 00:07:07.265 --rc geninfo_unexecuted_blocks=1 00:07:07.265 00:07:07.265 ' 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.265 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.266 
************************************ 00:07:07.266 START TEST nvmf_abort 00:07:07.266 ************************************ 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:07.266 * Looking for test storage... 00:07:07.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.266 --rc genhtml_branch_coverage=1 00:07:07.266 --rc genhtml_function_coverage=1 00:07:07.266 --rc genhtml_legend=1 00:07:07.266 --rc geninfo_all_blocks=1 00:07:07.266 --rc geninfo_unexecuted_blocks=1 00:07:07.266 00:07:07.266 ' 00:07:07.266 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.266 --rc genhtml_branch_coverage=1 00:07:07.266 --rc genhtml_function_coverage=1 00:07:07.266 --rc genhtml_legend=1 00:07:07.266 --rc geninfo_all_blocks=1 00:07:07.266 --rc geninfo_unexecuted_blocks=1 00:07:07.266 00:07:07.266 ' 00:07:07.524 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.524 --rc genhtml_branch_coverage=1 00:07:07.524 --rc genhtml_function_coverage=1 00:07:07.524 --rc genhtml_legend=1 00:07:07.524 --rc geninfo_all_blocks=1 00:07:07.524 --rc geninfo_unexecuted_blocks=1 00:07:07.524 00:07:07.524 ' 00:07:07.524 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.524 --rc genhtml_branch_coverage=1 00:07:07.524 --rc genhtml_function_coverage=1 00:07:07.524 --rc genhtml_legend=1 00:07:07.524 --rc geninfo_all_blocks=1 00:07:07.524 --rc geninfo_unexecuted_blocks=1 00:07:07.524 00:07:07.524 ' 00:07:07.524 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
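nvmftestinit, traced from here, detects the two 0x159b (E810-class) ports cvl_0_0 and cvl_0_1, moves cvl_0_0 into a private network namespace as the target side, keeps cvl_0_1 in the root namespace as the initiator, assigns 10.0.0.2/10.0.0.1, opens TCP port 4420 in iptables and verifies both directions with ping. The wiring it performs is roughly this sketch (interface names copied from the trace; they will differ on other machines):

# Target port lives in its own namespace so initiator and target talk over a real IP path.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator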
00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:07.525 23:31:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.427 23:31:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:09.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:09.427 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:09.427 23:31:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:09.427 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:09.427 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:09.427 23:31:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:09.427 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:09.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:07:09.428 00:07:09.428 --- 10.0.0.2 ping statistics --- 00:07:09.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.428 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:09.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:09.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:07:09.428 00:07:09.428 --- 10.0.0.1 ping statistics --- 00:07:09.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.428 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=51685 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 51685 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 51685 ']' 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.428 23:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.686 [2024-11-19 23:31:43.771646] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:07:09.686 [2024-11-19 23:31:43.771719] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.686 [2024-11-19 23:31:43.849921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.686 [2024-11-19 23:31:43.903549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.686 [2024-11-19 23:31:43.903603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.686 [2024-11-19 23:31:43.903626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.686 [2024-11-19 23:31:43.903637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.686 [2024-11-19 23:31:43.903647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:09.686 [2024-11-19 23:31:43.905025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.686 [2024-11-19 23:31:43.905151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.686 [2024-11-19 23:31:43.905165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.945 [2024-11-19 23:31:44.049470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.945 Malloc0 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.945 Delay0 
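Target-side configuration runs through rpc_cmd, SPDK's test wrapper around scripts/rpc.py, against the nvmf_tgt that was just started inside the target namespace. The transport and the Malloc/Delay bdevs are created above, and the subsystem, namespace and listeners follow below; taken together the calls correspond roughly to this rpc.py sequence (a sketch, with all parameters copied from the trace):

# Transport, backing bdevs, subsystem, namespace and listeners for the abort test.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB RAM-backed bdev, 4096-byte blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The Delay0 bdev layered on Malloc0 is what keeps commands in flight long enough for the abort example to have something to cancel.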
00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.945 [2024-11-19 23:31:44.126293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.945 23:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:10.204 [2024-11-19 23:31:44.284186] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:12.103 Initializing NVMe Controllers 00:07:12.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:12.103 controller IO queue size 128 less than required 00:07:12.103 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:12.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:12.103 Initialization complete. Launching workers. 
00:07:12.103 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28367 00:07:12.103 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28428, failed to submit 62 00:07:12.103 success 28371, unsuccessful 57, failed 0 00:07:12.103 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:12.103 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.103 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:12.103 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.103 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:12.103 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:12.103 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:12.103 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:12.103 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:12.103 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:12.103 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:12.103 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:12.103 rmmod nvme_tcp 00:07:12.361 rmmod nvme_fabrics 00:07:12.361 rmmod nvme_keyring 00:07:12.361 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:12.361 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 51685 ']' 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 51685 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 51685 ']' 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 51685 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 51685 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 51685' 00:07:12.362 killing process with pid 51685 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 51685 00:07:12.362 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 51685 00:07:12.621 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:12.621 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:12.621 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:12.621 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:12.621 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:12.621 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:12.621 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:12.621 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:12.621 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:12.621 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.621 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.621 23:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.530 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:14.530 00:07:14.530 real 0m7.342s 00:07:14.530 user 0m10.927s 00:07:14.530 sys 0m2.487s 00:07:14.530 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.530 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.530 ************************************ 00:07:14.530 END TEST nvmf_abort 00:07:14.530 ************************************ 00:07:14.530 23:31:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:14.530 23:31:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:14.530 23:31:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.530 23:31:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:14.530 ************************************ 00:07:14.530 START TEST nvmf_ns_hotplug_stress 00:07:14.530 ************************************ 00:07:14.530 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:14.790 * Looking for test storage... 
00:07:14.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:14.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.790 --rc genhtml_branch_coverage=1 00:07:14.790 --rc genhtml_function_coverage=1 00:07:14.790 --rc genhtml_legend=1 00:07:14.790 --rc geninfo_all_blocks=1 00:07:14.790 --rc geninfo_unexecuted_blocks=1 00:07:14.790 00:07:14.790 ' 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:14.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.790 --rc genhtml_branch_coverage=1 00:07:14.790 --rc genhtml_function_coverage=1 00:07:14.790 --rc genhtml_legend=1 00:07:14.790 --rc geninfo_all_blocks=1 00:07:14.790 --rc geninfo_unexecuted_blocks=1 00:07:14.790 00:07:14.790 ' 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:14.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.790 --rc genhtml_branch_coverage=1 00:07:14.790 --rc genhtml_function_coverage=1 00:07:14.790 --rc genhtml_legend=1 00:07:14.790 --rc geninfo_all_blocks=1 00:07:14.790 --rc geninfo_unexecuted_blocks=1 00:07:14.790 00:07:14.790 ' 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:14.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.790 --rc genhtml_branch_coverage=1 00:07:14.790 --rc genhtml_function_coverage=1 00:07:14.790 --rc genhtml_legend=1 00:07:14.790 --rc geninfo_all_blocks=1 00:07:14.790 --rc geninfo_unexecuted_blocks=1 00:07:14.790 00:07:14.790 ' 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.790 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:14.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:14.791 23:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:17.323 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.323 
23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:17.323 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.323 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:17.324 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:17.324 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:17.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:07:17.324 00:07:17.324 --- 10.0.0.2 ping statistics --- 00:07:17.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.324 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:07:17.324 00:07:17.324 --- 10.0.0.1 ping statistics --- 00:07:17.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.324 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=53969 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 53969 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 53969 
']' 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.324 [2024-11-19 23:31:51.226668] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:07:17.324 [2024-11-19 23:31:51.226758] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.324 [2024-11-19 23:31:51.302663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.324 [2024-11-19 23:31:51.350716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.324 [2024-11-19 23:31:51.350777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.324 [2024-11-19 23:31:51.350790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.324 [2024-11-19 23:31:51.350801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.324 [2024-11-19 23:31:51.350810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
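(Aside on the step recorded above: nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace and then blocks until the app's RPC socket answers. A minimal stand-alone sketch of that bring-up, using only commands that appear in the trace; the polling loop is illustrative and is not the harness's own waitforlisten implementation, and the /var/tmp/spdk.sock path is the default assumed here.)

# Start the NVMe-oF target inside the test namespace, pinned by the logged mask (-m 0xE),
# with all trace groups enabled (-e 0xFFFF), as in the nvmf/common.sh@508 line above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Illustrative wait: poll until the app responds on its UNIX-domain RPC socket.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    if "$rpc_py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done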
00:07:17.324 [2024-11-19 23:31:51.352261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.324 [2024-11-19 23:31:51.352322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.324 [2024-11-19 23:31:51.352326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:17.324 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:17.582 [2024-11-19 23:31:51.761712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.582 23:31:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:17.840 23:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.097 [2024-11-19 23:31:52.304290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.097 23:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:18.355 23:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:18.613 Malloc0 00:07:18.613 23:31:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:18.870 Delay0 00:07:18.870 23:31:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.128 23:31:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:19.386 NULL1 00:07:19.643 23:31:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:19.901 23:31:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=54391 00:07:19.901 23:31:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:19.901 23:31:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:19.901 23:31:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.159 23:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.416 23:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:20.416 23:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:20.674 true 00:07:20.674 23:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:20.674 23:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.932 23:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.189 23:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:21.189 23:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:21.447 true 00:07:21.447 23:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:21.447 23:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.704 23:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.962 23:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:21.962 23:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:22.220 true 00:07:22.220 23:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:22.220 23:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.152 Read completed with error (sct=0, sc=11) 00:07:23.152 23:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.410 23:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:23.410 23:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:23.667 true 00:07:23.667 23:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:23.667 23:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.925 23:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.183 23:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:24.183 23:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:24.440 true 00:07:24.698 23:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:24.698 23:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.262 23:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.519 23:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:25.519 23:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:25.777 true 00:07:25.777 23:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:25.777 23:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.342 23:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.342 23:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:26.342 23:32:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:26.600 true 00:07:26.600 23:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:26.600 23:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.165 23:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.165 23:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:27.165 23:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:27.423 true 00:07:27.423 23:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:27.423 23:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.356 23:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.921 23:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:28.921 23:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:28.921 true 00:07:28.921 23:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:28.921 23:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.178 23:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.436 23:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:29.436 23:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:30.001 true 00:07:30.001 23:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:30.001 23:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.001 23:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.259 23:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:30.259 23:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:30.516 true 00:07:30.516 23:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:30.516 23:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.889 23:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.889 23:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:31.889 23:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:32.146 true 00:07:32.147 23:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:32.147 23:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.404 23:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.662 23:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:32.662 23:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:32.920 true 00:07:32.920 23:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:32.920 23:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.178 23:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.436 23:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:33.436 23:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:33.694 true 00:07:33.694 23:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:33.694 23:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.066 23:32:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.066 23:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:35.067 23:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:35.324 true 00:07:35.324 23:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:35.324 23:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.582 23:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.839 23:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:35.839 23:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:36.097 true 00:07:36.097 23:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:36.097 23:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.355 23:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.613 23:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:36.613 23:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:36.871 true 00:07:36.871 23:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:36.871 23:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.246 23:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.246 23:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:38.246 23:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:38.504 true 00:07:38.504 23:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 
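(The trace above repeats one pattern: while the spdk_nvme_perf initiator is still alive, namespace 1 is detached, re-attached backed by Delay0, and NULL1 is grown by one block. A condensed sketch of that cycle, reusing the names seen in the trace ($rpc_py, NULL1, null_size, PERF_PID); the loop condition and starting size are taken from the logged values, but this is a sketch of the pattern, not the script verbatim.)

null_size=1000
# Keep hot-plugging namespace 1 for as long as the perf job (PID in PERF_PID) is running.
while kill -0 "$PERF_PID" 2>/dev/null; do
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))                    # 1001, 1002, ... as logged
    "$rpc_py" bdev_null_resize NULL1 "$null_size"   # grow the null bdev by one 512-byte block
done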
00:07:38.504 23:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.761 23:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.020 23:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:39.020 23:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:39.277 true 00:07:39.277 23:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:39.277 23:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.535 23:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.793 23:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:39.793 23:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:40.051 true 00:07:40.051 23:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:40.051 23:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.985 23:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.243 23:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:41.243 23:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:41.501 true 00:07:41.501 23:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:41.501 23:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.758 23:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.016 23:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 
00:07:42.016 23:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:42.274 true 00:07:42.274 23:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:42.274 23:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.206 23:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.464 23:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:43.464 23:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:43.722 true 00:07:43.722 23:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:43.722 23:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.980 23:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.238 23:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:44.238 23:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:44.506 true 00:07:44.506 23:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:44.506 23:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.550 23:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.550 23:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:45.550 23:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:45.807 true 00:07:45.807 23:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:45.807 23:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.065 23:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.323 23:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:46.323 23:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:46.581 true 00:07:46.839 23:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:46.839 23:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.771 23:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.771 23:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:47.771 23:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:48.029 true 00:07:48.029 23:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:48.029 23:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.286 23:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.543 23:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:48.543 23:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:48.801 true 00:07:48.801 23:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:48.801 23:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.733 23:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.990 23:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 
00:07:49.990 23:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:50.247 true 00:07:50.247 Initializing NVMe Controllers 00:07:50.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:50.247 Controller IO queue size 128, less than required. 00:07:50.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:50.247 Controller IO queue size 128, less than required. 00:07:50.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:50.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:50.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:50.247 Initialization complete. Launching workers. 00:07:50.247 ======================================================== 00:07:50.247 Latency(us) 00:07:50.247 Device Information : IOPS MiB/s Average min max 00:07:50.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 456.80 0.22 113966.86 2563.80 1012553.53 00:07:50.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8514.27 4.16 14988.96 3458.47 449455.64 00:07:50.247 ======================================================== 00:07:50.247 Total : 8971.06 4.38 20028.81 2563.80 1012553.53 00:07:50.247 00:07:50.247 23:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 54391 00:07:50.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (54391) - No such process 00:07:50.247 23:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 54391 00:07:50.247 23:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.505 23:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:50.762 23:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:50.762 23:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:50.762 23:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:50.762 23:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.762 23:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:51.019 null0 00:07:51.019 23:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.019 23:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.019 23:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 
4096 00:07:51.276 null1 00:07:51.276 23:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.276 23:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.276 23:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:51.534 null2 00:07:51.534 23:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.534 23:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.534 23:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:51.791 null3 00:07:51.791 23:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.791 23:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.791 23:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:52.048 null4 00:07:52.048 23:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.048 23:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.048 23:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:52.306 null5 00:07:52.306 23:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.306 23:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.306 23:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:52.563 null6 00:07:52.563 23:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.563 23:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.563 23:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:52.822 null7 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 58347 58348 58350 58352 58354 58356 58358 58360 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.822 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.389 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.648 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.907 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.907 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.907 23:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.907 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.907 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.907 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.907 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.907 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.165 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.423 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.423 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.423 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.423 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.423 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.423 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.423 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.423 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.682 23:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.940 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.940 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.940 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.940 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.940 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.940 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.940 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.940 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.198 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:55.198 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.198 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.198 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.198 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.198 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.198 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.198 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.198 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.198 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.198 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.198 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.199 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.199 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.199 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.199 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.199 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.199 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.457 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.457 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.457 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.457 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.457 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.457 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.715 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.715 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.715 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.715 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.715 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.715 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.715 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.715 23:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
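The xtrace lines above come from eight concurrent add_remove workers, each repeatedly attaching and detaching its own null bdev as a namespace of nqn.2016-06.io.spdk:cnode1. A minimal sketch of that logic, reconstructed only from the traced script lines visible in this log (ns_hotplug_stress.sh lines 14-18 and 59-66), is shown below; it is not the verbatim script, the rpc.py path is shortened for readability, and the loop variable names are assumptions where the trace does not show them.

    # Rough reconstruction from the trace; not the actual target/ns_hotplug_stress.sh.
    add_remove() {                        # traced as sh lines 14-18
            local nsid=$1 bdev=$2
            for ((i = 0; i < 10; i++)); do
                    # attach the bdev as namespace <nsid>, then immediately detach it
                    rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
                    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
            done
    }

    for ((i = 0; i < nthreads; i++)); do  # traced as sh lines 59-60
            rpc.py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do  # traced as sh lines 62-66
            add_remove $((i + 1)) "null$i" &
            pids+=($!)
    done
    wait "${pids[@]}"

Because the workers run in the background concurrently, their xtrace output interleaves in the console, which is why multiple add/remove commands with the same timestamp appear run together on single log lines.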
00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.973 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.231 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.231 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.231 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.231 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.231 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.231 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.231 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.231 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.490 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.748 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.748 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.748 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.748 23:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.748 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.748 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.748 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.748 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.006 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.007 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.007 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.007 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.007 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.572 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.572 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.572 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:57.572 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.572 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:57.572 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:57.572 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:57.572 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.830 23:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:58.088 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.088 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.088 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.088 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.088 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:58.088 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.088 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.088 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.346 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:58.604 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.604 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:58.604 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:58.604 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:58.604 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:58.604 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:58.604 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:58.604 23:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:58.862 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.862 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.862 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.862 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.862 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.862 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.862 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.863 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:58.863 rmmod nvme_tcp 00:07:58.863 rmmod nvme_fabrics 00:07:58.863 rmmod nvme_keyring 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 53969 ']' 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 53969 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 53969 ']' 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 53969 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 53969 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 53969' 00:07:59.122 killing process with pid 53969 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 53969 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 53969 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:59.122 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.381 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.381 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.381 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.381 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.381 23:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.281 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.281 00:08:01.281 real 0m46.648s 00:08:01.281 user 3m38.027s 00:08:01.281 sys 0m15.856s 00:08:01.281 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.281 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.281 ************************************ 00:08:01.281 END TEST nvmf_ns_hotplug_stress 00:08:01.281 ************************************ 00:08:01.281 23:32:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:01.281 23:32:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.281 23:32:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.281 23:32:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.281 ************************************ 00:08:01.281 START TEST nvmf_delete_subsystem 00:08:01.281 ************************************ 00:08:01.281 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:01.281 * Looking for test storage... 
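For reference, the ns_hotplug_stress activity traced above reduces to a short add/remove cycle against cnode1. A minimal bash sketch, using the rpc.py path and NQN shown in the trace and paraphrasing the @16-@18 loop (the real script interleaves these calls with I/O and runs several passes, which is omitted here):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # Attach null0..null7 as namespaces 1..8 (the trace issues these in shuffled order).
    for i in $(seq 0 7); do
        "$RPC" nvmf_subsystem_add_ns -n $((i + 1)) "$NQN" "null$i"
    done
    # Detach them again before tearing the target down.
    for nsid in $(seq 1 8); do
        "$RPC" nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done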
00:08:01.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.281 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.281 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.281 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.539 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.539 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.539 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.539 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.539 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.540 --rc genhtml_branch_coverage=1 00:08:01.540 --rc genhtml_function_coverage=1 00:08:01.540 --rc genhtml_legend=1 00:08:01.540 --rc geninfo_all_blocks=1 00:08:01.540 --rc geninfo_unexecuted_blocks=1 00:08:01.540 00:08:01.540 ' 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:01.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.540 --rc genhtml_branch_coverage=1 00:08:01.540 --rc genhtml_function_coverage=1 00:08:01.540 --rc genhtml_legend=1 00:08:01.540 --rc geninfo_all_blocks=1 00:08:01.540 --rc geninfo_unexecuted_blocks=1 00:08:01.540 00:08:01.540 ' 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.540 --rc genhtml_branch_coverage=1 00:08:01.540 --rc genhtml_function_coverage=1 00:08:01.540 --rc genhtml_legend=1 00:08:01.540 --rc geninfo_all_blocks=1 00:08:01.540 --rc geninfo_unexecuted_blocks=1 00:08:01.540 00:08:01.540 ' 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.540 --rc genhtml_branch_coverage=1 00:08:01.540 --rc genhtml_function_coverage=1 00:08:01.540 --rc genhtml_legend=1 00:08:01.540 --rc geninfo_all_blocks=1 00:08:01.540 --rc geninfo_unexecuted_blocks=1 00:08:01.540 00:08:01.540 ' 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:01.540 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.541 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.541 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.541 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:01.541 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:01.541 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.541 23:32:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:03.441 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.441 
23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:03.441 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.441 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:03.442 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:03.442 Found net devices under 0000:0a:00.1: cvl_0_1 
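The nvmf/common.sh device probing traced here is essentially a sysfs walk from the whitelisted PCI IDs to their kernel interface names. A condensed sketch using the two E810 ports reported above (array names follow the trace; the driver and link-state checks are left out):

    pci_devs=(0000:0a:00.0 0000:0a:00.1)                    # 0x8086:0x159b ports found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../0000:0a:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface names
        net_devs+=("${pci_net_devs[@]}")
    done
    echo "${net_devs[@]}"                                   # -> cvl_0_0 cvl_0_1 on this node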
00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.442 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:08:03.701 00:08:03.701 --- 10.0.0.2 ping statistics --- 00:08:03.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.701 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:08:03.701 00:08:03.701 --- 10.0.0.1 ping statistics --- 00:08:03.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.701 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=61235 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 61235 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 61235 ']' 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.701 23:32:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.701 23:32:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.701 [2024-11-19 23:32:37.845615] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:08:03.701 [2024-11-19 23:32:37.845706] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.701 [2024-11-19 23:32:37.917274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:03.701 [2024-11-19 23:32:37.961266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.701 [2024-11-19 23:32:37.961322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.701 [2024-11-19 23:32:37.961346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.701 [2024-11-19 23:32:37.961373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.701 [2024-11-19 23:32:37.961383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.701 [2024-11-19 23:32:37.962756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.701 [2024-11-19 23:32:37.962761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.960 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.960 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:03.960 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.960 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.960 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.960 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.960 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.960 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.960 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.960 [2024-11-19 23:32:38.111933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:03.961 23:32:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.961 [2024-11-19 23:32:38.128217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.961 NULL1 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.961 Delay0 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=61269 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:03.961 23:32:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:03.961 [2024-11-19 23:32:38.212996] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
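Everything from nvmf_tcp_init through the Delay0 namespace above can be condensed into the following replay of the traced commands. This is a sketch, not the scripts themselves: $SPDK and $RPC stand in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk paths, and the readiness waits between steps are omitted:

    # Target lives in its own network namespace on 10.0.0.2; the initiator side stays
    # in the default namespace on 10.0.0.1 (interfaces cvl_0_0 / cvl_0_1 from the probe above).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Bring up the target and configure the subsystem under test.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$RPC" bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
    "$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0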
00:08:05.855 23:32:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.855 23:32:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.855 23:32:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 starting I/O failed: -6 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 starting I/O failed: -6 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 starting I/O failed: -6 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 starting I/O failed: -6 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 starting I/O failed: -6 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 starting I/O failed: -6 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 starting I/O failed: -6 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 starting I/O failed: -6 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 starting I/O failed: -6 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 starting I/O failed: -6 00:08:06.113 Read completed with error (sct=0, sc=8) 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 Write completed with error (sct=0, sc=8) 00:08:06.113 [2024-11-19 23:32:40.423367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1607b40 is same with the state(6) to be set 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write 
completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 
00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 [2024-11-19 23:32:40.424790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16073f0 is same with the state(6) to be set 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error 
(sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 starting I/O failed: -6 00:08:06.371 Read completed with error (sct=0, sc=8) 00:08:06.371 Write completed with error (sct=0, sc=8) 00:08:06.372 starting I/O failed: -6 00:08:06.372 Read completed with error (sct=0, sc=8) 00:08:06.372 Read completed with error (sct=0, sc=8) 00:08:06.372 starting I/O failed: -6 00:08:06.372 Write completed with error (sct=0, sc=8) 00:08:06.372 Write completed with error (sct=0, sc=8) 00:08:06.372 starting I/O failed: -6 00:08:06.372 Write completed with error (sct=0, sc=8) 00:08:06.372 Write completed with error (sct=0, sc=8) 00:08:06.372 starting I/O failed: -6 00:08:06.372 Write completed with error (sct=0, sc=8) 00:08:06.372 Read completed with error (sct=0, sc=8) 00:08:06.372 starting I/O failed: -6 00:08:06.372 Read completed with error (sct=0, sc=8) 00:08:06.372 Read completed with error (sct=0, sc=8) 00:08:06.372 starting I/O failed: -6 00:08:06.372 Read completed with error (sct=0, sc=8) 00:08:06.372 Write completed with error (sct=0, sc=8) 00:08:06.372 starting I/O failed: -6 00:08:06.372 Read completed with error (sct=0, sc=8) 00:08:06.372 Read completed with error (sct=0, sc=8) 00:08:06.372 starting I/O failed: -6 00:08:06.372 Read completed with error (sct=0, sc=8) 00:08:06.372 Read completed with error (sct=0, sc=8) 00:08:06.372 starting I/O failed: -6 00:08:06.372 starting I/O failed: -6 00:08:06.372 starting I/O failed: -6 00:08:06.372 starting I/O failed: -6 00:08:06.372 starting I/O failed: -6 00:08:07.304 [2024-11-19 23:32:41.394263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16155b0 is same with the state(6) to be set 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Write completed with 
error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 [2024-11-19 23:32:41.425712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f80a000d020 is same with the state(6) to be set 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.304 Read completed with error (sct=0, sc=8) 00:08:07.304 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 
00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 [2024-11-19 23:32:41.425980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f80a000d680 is same with the state(6) to be set 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 [2024-11-19 23:32:41.427524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1607810 is same with the state(6) to be set 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Write completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 Read completed with error (sct=0, sc=8) 00:08:07.305 [2024-11-19 23:32:41.427782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1607e70 is same with the state(6) to be set 00:08:07.305 Initializing NVMe Controllers 00:08:07.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:07.305 Controller IO queue size 128, less than required. 00:08:07.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:07.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:07.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:07.305 Initialization complete. Launching workers. 00:08:07.305 ======================================================== 00:08:07.305 Latency(us) 00:08:07.305 Device Information : IOPS MiB/s Average min max 00:08:07.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.81 0.08 906695.21 609.56 1042893.99 00:08:07.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 183.68 0.09 926691.47 603.27 2002356.03 00:08:07.305 ======================================================== 00:08:07.305 Total : 348.49 0.17 917234.55 603.27 2002356.03 00:08:07.305 00:08:07.305 [2024-11-19 23:32:41.428799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16155b0 (9): Bad file descriptor 00:08:07.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:07.305 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.305 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:07.305 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 61269 00:08:07.305 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 61269 00:08:07.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (61269) - No such process 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 61269 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 61269 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 61269 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.870 [2024-11-19 23:32:41.950491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=61780 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 61780 00:08:07.870 23:32:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:07.870 [2024-11-19 23:32:42.015744] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
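The trace above rebuilds the target side before the second perf run: delete_subsystem.sh recreates nqn.2016-06.io.spdk:cnode1 with a serial number and a 10-namespace cap, adds a TCP listener on 10.0.0.2:4420, attaches the Delay0 bdev as a namespace, and launches spdk_nvme_perf against it. A minimal sketch of that sequence follows, assuming a running nvmf_tgt reachable through the suite's rpc_cmd wrapper (scripts/rpc.py should accept the same RPC names and flags); the perf_pid variable name is illustrative.

# Recreate the subsystem and restart the I/O load (reconstructed from delete_subsystem.sh@48-54 above)
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# 3 s of 70/30 random read/write at queue depth 128 over NVMe/TCP; -c 0xC keeps the workers on
# cores 2 and 3, matching the "NSID 1 with lcore 2/3" association lines in the perf output
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!   # 61780 in this run

The "completed with error (sct=0, sc=8)" bursts earlier in the log come from the same test deleting the subsystem underneath the first perf run (pid 61269) while its I/O was still in flight.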
00:08:08.435 23:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.435 23:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 61780 00:08:08.436 23:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:08.693 23:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.694 23:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 61780 00:08:08.694 23:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.259 23:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.259 23:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 61780 00:08:09.259 23:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.823 23:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.823 23:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 61780 00:08:09.823 23:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.388 23:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:10.388 23:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 61780 00:08:10.388 23:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.693 23:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:10.693 23:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 61780 00:08:10.693 23:32:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.976 Initializing NVMe Controllers 00:08:10.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:10.976 Controller IO queue size 128, less than required. 00:08:10.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:10.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:10.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:10.976 Initialization complete. Launching workers. 
00:08:10.976 ======================================================== 00:08:10.976 Latency(us) 00:08:10.976 Device Information : IOPS MiB/s Average min max 00:08:10.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004202.04 1000189.16 1013164.64 00:08:10.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005836.50 1000209.58 1042247.93 00:08:10.976 ======================================================== 00:08:10.976 Total : 256.00 0.12 1005019.27 1000189.16 1042247.93 00:08:10.976 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 61780 00:08:11.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (61780) - No such process 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 61780 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.233 rmmod nvme_tcp 00:08:11.233 rmmod nvme_fabrics 00:08:11.233 rmmod nvme_keyring 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 61235 ']' 00:08:11.233 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 61235 00:08:11.234 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 61235 ']' 00:08:11.234 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 61235 00:08:11.234 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:11.234 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.234 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61235 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
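Both perf runs are reaped with the polling pattern traced above (delete_subsystem.sh lines 56-60 for the second run): the script probes the perf PID with kill -0, sleeps 0.5 s between probes, and gives up after roughly twenty iterations; the loop ends once bash's kill reports "No such process". A rough reconstruction, with the failure handling simplified to a plain exit:

delay=0
while kill -0 "$perf_pid"; do        # signal 0 only tests whether the PID still exists
    sleep 0.5
    if (( delay++ > 20 )); then      # roughly a 10 s budget (20 x 0.5 s)
        echo "spdk_nvme_perf did not finish in time" >&2
        exit 1
    fi
done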
00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61235' 00:08:11.492 killing process with pid 61235 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 61235 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 61235 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.492 23:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.029 23:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:14.029 00:08:14.029 real 0m12.298s 00:08:14.029 user 0m28.010s 00:08:14.029 sys 0m2.941s 00:08:14.029 23:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.029 23:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.029 ************************************ 00:08:14.029 END TEST nvmf_delete_subsystem 00:08:14.029 ************************************ 00:08:14.029 23:32:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:14.029 23:32:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.029 23:32:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.029 23:32:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.029 ************************************ 00:08:14.029 START TEST nvmf_host_management 00:08:14.029 ************************************ 00:08:14.029 23:32:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:14.029 * Looking for test storage... 
00:08:14.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.029 23:32:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.029 23:32:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.029 23:32:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.029 --rc genhtml_branch_coverage=1 00:08:14.029 --rc genhtml_function_coverage=1 00:08:14.029 --rc genhtml_legend=1 00:08:14.029 --rc geninfo_all_blocks=1 00:08:14.029 --rc geninfo_unexecuted_blocks=1 00:08:14.029 00:08:14.029 ' 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:14.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.029 --rc genhtml_branch_coverage=1 00:08:14.029 --rc genhtml_function_coverage=1 00:08:14.029 --rc genhtml_legend=1 00:08:14.029 --rc geninfo_all_blocks=1 00:08:14.029 --rc geninfo_unexecuted_blocks=1 00:08:14.029 00:08:14.029 ' 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.029 --rc genhtml_branch_coverage=1 00:08:14.029 --rc genhtml_function_coverage=1 00:08:14.029 --rc genhtml_legend=1 00:08:14.029 --rc geninfo_all_blocks=1 00:08:14.029 --rc geninfo_unexecuted_blocks=1 00:08:14.029 00:08:14.029 ' 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.029 --rc genhtml_branch_coverage=1 00:08:14.029 --rc genhtml_function_coverage=1 00:08:14.029 --rc genhtml_legend=1 00:08:14.029 --rc geninfo_all_blocks=1 00:08:14.029 --rc geninfo_unexecuted_blocks=1 00:08:14.029 00:08:14.029 ' 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.029 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:14.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:14.030 23:32:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:15.930 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.930 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.930 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.930 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.930 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:15.931 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:15.931 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:15.931 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.931 23:32:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:15.931 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.931 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:16.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:08:16.190 00:08:16.190 --- 10.0.0.2 ping statistics --- 00:08:16.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.190 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:08:16.190 00:08:16.190 --- 10.0.0.1 ping statistics --- 00:08:16.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.190 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=64167 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 64167 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 64167 ']' 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.190 
23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.190 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.190 [2024-11-19 23:32:50.325569] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:08:16.190 [2024-11-19 23:32:50.325670] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.190 [2024-11-19 23:32:50.406199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.190 [2024-11-19 23:32:50.457831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.190 [2024-11-19 23:32:50.457892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.190 [2024-11-19 23:32:50.457910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.190 [2024-11-19 23:32:50.457924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.190 [2024-11-19 23:32:50.457935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
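For the host_management test the target is started by nvmfappstart inside the cvl_0_0_ns_spdk namespace (nvmf/common.sh@508-510 above), and the script then blocks until the app's RPC socket answers. The fragment below is only an approximation of that start-and-wait step, assuming the default /var/tmp/spdk.sock socket and the repository's scripts/rpc.py; the real waitforlisten helper lives in the suite's autotest_common.sh.

# Launch nvmf_tgt on cores 1-4 (-m 0x1E) inside the test namespace, then wait for its RPC socket
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!   # 64167 in this run
while ! scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # stop waiting if the target died during start-up
    sleep 0.1
done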
00:08:16.190 [2024-11-19 23:32:50.459700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.190 [2024-11-19 23:32:50.459790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.190 [2024-11-19 23:32:50.459859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:16.190 [2024-11-19 23:32:50.459861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.449 [2024-11-19 23:32:50.597855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.449 Malloc0 00:08:16.449 [2024-11-19 23:32:50.671844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=64213 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64213 /var/tmp/bdevperf.sock 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 64213 ']' 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:16.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.449 { 00:08:16.449 "params": { 00:08:16.449 "name": "Nvme$subsystem", 00:08:16.449 "trtype": "$TEST_TRANSPORT", 00:08:16.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.449 "adrfam": "ipv4", 00:08:16.449 "trsvcid": "$NVMF_PORT", 00:08:16.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.449 "hdgst": ${hdgst:-false}, 00:08:16.449 "ddgst": ${ddgst:-false} 00:08:16.449 }, 00:08:16.449 "method": "bdev_nvme_attach_controller" 00:08:16.449 } 00:08:16.449 EOF 00:08:16.449 )") 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:16.449 23:32:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.449 "params": { 00:08:16.449 "name": "Nvme0", 00:08:16.449 "trtype": "tcp", 00:08:16.449 "traddr": "10.0.0.2", 00:08:16.449 "adrfam": "ipv4", 00:08:16.449 "trsvcid": "4420", 00:08:16.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:16.449 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:16.449 "hdgst": false, 00:08:16.449 "ddgst": false 00:08:16.449 }, 00:08:16.449 "method": "bdev_nvme_attach_controller" 00:08:16.449 }' 00:08:16.449 [2024-11-19 23:32:50.756560] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:08:16.449 [2024-11-19 23:32:50.756637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64213 ] 00:08:16.708 [2024-11-19 23:32:50.825675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.708 [2024-11-19 23:32:50.872636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.966 Running I/O for 10 seconds... 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:16.966 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:17.225 
23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=577 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 577 -ge 100 ']' 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.225 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.225 [2024-11-19 23:32:51.483498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.225 [2024-11-19 23:32:51.483566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.225 [2024-11-19 23:32:51.483595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.225 [2024-11-19 23:32:51.483625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.225 [2024-11-19 23:32:51.483642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.225 [2024-11-19 23:32:51.483657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.225 [2024-11-19 23:32:51.483672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.225 [2024-11-19 23:32:51.483686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.225 [2024-11-19 23:32:51.483703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.225 [2024-11-19 23:32:51.483716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:17.225 [2024-11-19 23:32:51.483731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.225 [2024-11-19 23:32:51.483745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.225 [2024-11-19 23:32:51.483760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.225 [2024-11-19 23:32:51.483773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.225 [2024-11-19 23:32:51.483788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.225 [2024-11-19 23:32:51.483802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.225 [2024-11-19 23:32:51.483817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.225 [2024-11-19 23:32:51.483831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.483846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.483859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.483874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.483888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.483903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.483917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.483932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.483945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.483960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.483974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.483989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 
[2024-11-19 23:32:51.484023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 
23:32:51.484318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 
23:32:51.484614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 
23:32:51.484910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.226 [2024-11-19 23:32:51.484967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.226 [2024-11-19 23:32:51.484981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.484997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485248] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.227 [2024-11-19 23:32:51.485500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:17.227 [2024-11-19 23:32:51.485674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485691] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:17.227 [2024-11-19 23:32:51.485705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:17.227 [2024-11-19 23:32:51.485733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:17.227 [2024-11-19 23:32:51.485762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.227 [2024-11-19 23:32:51.485776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220fd70 is same with the state(6) to be set 00:08:17.227 [2024-11-19 23:32:51.486897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:17.227 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.227 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:17.227 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.227 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.227 task offset: 83328 on job bdev=Nvme0n1 fails 00:08:17.227 00:08:17.227 Latency(us) 00:08:17.227 [2024-11-19T22:32:51.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.227 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:17.227 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:17.227 Verification LBA range: start 0x0 length 0x400 00:08:17.227 Nvme0n1 : 0.40 1613.58 100.85 161.36 0.00 35003.10 2827.76 33981.63 00:08:17.227 [2024-11-19T22:32:51.539Z] =================================================================================================================== 00:08:17.227 [2024-11-19T22:32:51.539Z] Total : 1613.58 100.85 161.36 0.00 35003.10 2827.76 33981.63 00:08:17.227 [2024-11-19 23:32:51.488758] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.227 [2024-11-19 23:32:51.488787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220fd70 (9): Bad file descriptor 00:08:17.227 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.227 23:32:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:17.227 [2024-11-19 23:32:51.499616] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
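Before the host removal and controller reset traced above are injected, host_management.sh polls bdevperf over its RPC socket until the Nvme0n1 bdev has seen enough reads (the read_io_count=67 and read_io_count=577 values in the trace). A condensed sketch of that waitforio loop follows, using the same rpc.py and jq calls that appear in the trace (host_management.sh@45 through @64); the paths are the ones from this run.

# Condensed sketch of the waitforio loop traced above: poll bdevperf's RPC
# socket until the Nvme0n1 bdev has completed at least 100 reads.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BDEVPERF_SOCK=/var/tmp/bdevperf.sock

waitforio() {
    local i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$("$rpc_py" -s "$BDEVPERF_SOCK" bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            return 0    # enough traffic observed; safe to start the host removal
        fi
        sleep 0.25
    done
    return 1
}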
00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64213 00:08:18.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64213) - No such process 00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:18.600 { 00:08:18.600 "params": { 00:08:18.600 "name": "Nvme$subsystem", 00:08:18.600 "trtype": "$TEST_TRANSPORT", 00:08:18.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.600 "adrfam": "ipv4", 00:08:18.600 "trsvcid": "$NVMF_PORT", 00:08:18.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.600 "hdgst": ${hdgst:-false}, 00:08:18.600 "ddgst": ${ddgst:-false} 00:08:18.600 }, 00:08:18.600 "method": "bdev_nvme_attach_controller" 00:08:18.600 } 00:08:18.600 EOF 00:08:18.600 )") 00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:18.600 23:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:18.600 "params": { 00:08:18.600 "name": "Nvme0", 00:08:18.601 "trtype": "tcp", 00:08:18.601 "traddr": "10.0.0.2", 00:08:18.601 "adrfam": "ipv4", 00:08:18.601 "trsvcid": "4420", 00:08:18.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:18.601 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:18.601 "hdgst": false, 00:08:18.601 "ddgst": false 00:08:18.601 }, 00:08:18.601 "method": "bdev_nvme_attach_controller" 00:08:18.601 }' 00:08:18.601 [2024-11-19 23:32:52.546140] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:08:18.601 [2024-11-19 23:32:52.546219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64488 ] 00:08:18.601 [2024-11-19 23:32:52.615129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.601 [2024-11-19 23:32:52.662722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.859 Running I/O for 1 seconds... 
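The params block printed by gen_nvmf_target_json above is handed to bdevperf through a process-substitution fd (--json /dev/fd/63 and /dev/fd/62 in the two runs). A hand-written equivalent is sketched below; the outer subsystems/bdev wrapper is the usual SPDK JSON-config shape the helper builds around the printed fragment (treat it as an approximation), and the address, port and NQNs are simply the values used in this run.

# Sketch: attach Nvme0 over NVMe/TCP and rerun the 1-second verify workload.
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
cfg=$(mktemp)

cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

"$bdevperf" --json "$cfg" -q 64 -o 65536 -w verify -t 1
rm -f "$cfg"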
00:08:19.793 1664.00 IOPS, 104.00 MiB/s 00:08:19.793 Latency(us) 00:08:19.793 [2024-11-19T22:32:54.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.793 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:19.793 Verification LBA range: start 0x0 length 0x400 00:08:19.793 Nvme0n1 : 1.02 1699.54 106.22 0.00 0.00 37041.65 7621.59 33204.91 00:08:19.793 [2024-11-19T22:32:54.105Z] =================================================================================================================== 00:08:19.793 [2024-11-19T22:32:54.105Z] Total : 1699.54 106.22 0.00 0.00 37041.65 7621.59 33204.91 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.051 rmmod nvme_tcp 00:08:20.051 rmmod nvme_fabrics 00:08:20.051 rmmod nvme_keyring 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 64167 ']' 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 64167 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 64167 ']' 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 64167 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64167 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64167' 00:08:20.051 killing process with pid 64167 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 64167 00:08:20.051 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 64167 00:08:20.310 [2024-11-19 23:32:54.491849] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:20.310 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.310 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.310 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.310 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:20.310 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:20.310 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.310 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:20.310 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.310 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.310 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.310 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.310 23:32:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:22.843 00:08:22.843 real 0m8.696s 00:08:22.843 user 0m19.100s 00:08:22.843 sys 0m2.761s 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:22.843 ************************************ 00:08:22.843 END TEST nvmf_host_management 00:08:22.843 ************************************ 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.843 ************************************ 00:08:22.843 START TEST nvmf_lvol 00:08:22.843 ************************************ 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:22.843 * Looking for test storage... 00:08:22.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:22.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.843 --rc genhtml_branch_coverage=1 00:08:22.843 --rc genhtml_function_coverage=1 00:08:22.843 --rc genhtml_legend=1 00:08:22.843 --rc geninfo_all_blocks=1 00:08:22.843 --rc geninfo_unexecuted_blocks=1 00:08:22.843 00:08:22.843 ' 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:22.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.843 --rc genhtml_branch_coverage=1 00:08:22.843 --rc genhtml_function_coverage=1 00:08:22.843 --rc genhtml_legend=1 00:08:22.843 --rc geninfo_all_blocks=1 00:08:22.843 --rc geninfo_unexecuted_blocks=1 00:08:22.843 00:08:22.843 ' 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:22.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.843 --rc genhtml_branch_coverage=1 00:08:22.843 --rc genhtml_function_coverage=1 00:08:22.843 --rc genhtml_legend=1 00:08:22.843 --rc geninfo_all_blocks=1 00:08:22.843 --rc geninfo_unexecuted_blocks=1 00:08:22.843 00:08:22.843 ' 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:22.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.843 --rc genhtml_branch_coverage=1 00:08:22.843 --rc genhtml_function_coverage=1 00:08:22.843 --rc genhtml_legend=1 00:08:22.843 --rc geninfo_all_blocks=1 00:08:22.843 --rc geninfo_unexecuted_blocks=1 00:08:22.843 00:08:22.843 ' 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
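The lcov probe traced a few lines above decides whether the installed lcov predates version 2 by splitting both version strings on ".-:" and comparing them component by component. A condensed sketch of that comparison is below; the real cmp_versions helper in scripts/common.sh also handles the other comparison operators, so this only mirrors the less-than path exercised here.

# Condensed sketch of the lt/cmp_versions logic traced above: split each
# version on ".-:" and compare the components numerically, left to right.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # the branch taken in this run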
00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.843 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:22.844 23:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.744 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:24.745 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:24.745 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:24.745 23:32:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:24.745 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:24.745 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
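The device scan traced above (gather_supported_nvmf_pci_devs) compares the host's NICs against the harness's lists of supported Intel E810/X722 and Mellanox device IDs, then resolves each matching PCI function to its kernel net device through sysfs, which is how 0000:0a:00.0 and 0000:0a:00.1 (Intel 0x159b, ice driver) end up reported as cvl_0_0 and cvl_0_1. A minimal stand-alone sketch of the same idea, assuming lspci is available; the loop below is illustrative and does not use the harness's cached PCI arrays:

# Find Intel E810 functions (device ID 0x159b) and map each one to its net device name.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] && echo "Found net device under $pci: $(basename "$netdev")"
    done
done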
> 1 )) 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:24.745 23:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:24.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:08:24.745 00:08:24.745 --- 10.0.0.2 ping statistics --- 00:08:24.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.745 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:24.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:08:24.745 00:08:24.745 --- 10.0.0.1 ping statistics --- 00:08:24.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.745 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=66681 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 66681 00:08:24.745 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 66681 ']' 00:08:24.746 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.746 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.746 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.746 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.746 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:25.004 [2024-11-19 23:32:59.093657] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
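The nvmf_tcp_init sequence above builds a two-endpoint topology on a single host: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP port 4420 on the initiator interface, and a ping in each direction confirms connectivity before any NVMe traffic flows. A condensed sketch of those steps, using the interface and namespace names discovered on this machine; the harness additionally flushes stale addresses first and embeds the full rule text in the iptables comment, both omitted here:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                         # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0 # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                      # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                  # namespace -> initiator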
00:08:25.004 [2024-11-19 23:32:59.093730] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.004 [2024-11-19 23:32:59.173420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.004 [2024-11-19 23:32:59.222171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.004 [2024-11-19 23:32:59.222236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.004 [2024-11-19 23:32:59.222262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.004 [2024-11-19 23:32:59.222276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.004 [2024-11-19 23:32:59.222288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.004 [2024-11-19 23:32:59.223808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.004 [2024-11-19 23:32:59.223877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.004 [2024-11-19 23:32:59.223880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.261 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.261 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:25.261 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:25.261 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:25.261 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:25.261 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.261 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:25.519 [2024-11-19 23:32:59.601657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.519 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:25.776 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:25.776 23:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:26.035 23:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:26.035 23:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:26.292 23:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:26.548 23:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4ad036b0-cc35-47a6-acba-2b86a60e9b07 00:08:26.548 23:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4ad036b0-cc35-47a6-acba-2b86a60e9b07 lvol 20 00:08:26.805 23:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=33412425-c81e-42aa-af18-53a7fdca18d6 00:08:26.805 23:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:27.371 23:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 33412425-c81e-42aa-af18-53a7fdca18d6 00:08:27.371 23:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:27.628 [2024-11-19 23:33:01.929312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.885 23:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.142 23:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=67133 00:08:28.142 23:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:28.142 23:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:29.075 23:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 33412425-c81e-42aa-af18-53a7fdca18d6 MY_SNAPSHOT 00:08:29.333 23:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8a146ed5-40e6-4438-adf5-43b521420dbe 00:08:29.333 23:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 33412425-c81e-42aa-af18-53a7fdca18d6 30 00:08:29.591 23:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8a146ed5-40e6-4438-adf5-43b521420dbe MY_CLONE 00:08:30.156 23:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f48b33f0-f2d8-4dde-996d-f28d0762f49f 00:08:30.156 23:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f48b33f0-f2d8-4dde-996d-f28d0762f49f 00:08:30.722 23:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 67133 00:08:38.828 Initializing NVMe Controllers 00:08:38.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:38.828 Controller IO queue size 128, less than required. 00:08:38.828 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
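The nvmf_lvol body traced above provisions its whole stack over JSON-RPC: a TCP transport, two 64 MB / 512 B malloc bdevs striped into raid0, an lvstore named lvs on the raid, an lvol sized by LVOL_BDEV_INIT_SIZE=20, and an NVMe-oF subsystem exposing that lvol on 10.0.0.2:4420; while spdk_nvme_perf drives random writes it then takes MY_SNAPSHOT, resizes the lvol to LVOL_BDEV_FINAL_SIZE=30, clones the snapshot as MY_CLONE and inflates the clone. A compact sketch of the same call sequence, with $rpc standing in for scripts/rpc.py and the lvstore/lvol/snapshot/clone identifiers captured into shell variables instead of the concrete UUIDs above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                               # Malloc0
$rpc bdev_malloc_create 64 512                               # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)               # prints the new lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)              # prints the new lvol UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# While the perf job is running: snapshot, grow, clone, inflate.
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"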
00:08:38.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:38.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:38.828 Initialization complete. Launching workers. 00:08:38.828 ======================================================== 00:08:38.828 Latency(us) 00:08:38.828 Device Information : IOPS MiB/s Average min max 00:08:38.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10678.20 41.71 11989.78 1329.54 123663.54 00:08:38.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10625.00 41.50 12056.49 2109.53 52606.94 00:08:38.828 ======================================================== 00:08:38.828 Total : 21303.20 83.22 12023.05 1329.54 123663.54 00:08:38.828 00:08:38.828 23:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:38.828 23:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 33412425-c81e-42aa-af18-53a7fdca18d6 00:08:39.086 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ad036b0-cc35-47a6-acba-2b86a60e9b07 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.344 rmmod nvme_tcp 00:08:39.344 rmmod nvme_fabrics 00:08:39.344 rmmod nvme_keyring 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 66681 ']' 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 66681 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 66681 ']' 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 66681 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66681 00:08:39.344 23:33:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66681' 00:08:39.344 killing process with pid 66681 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 66681 00:08:39.344 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 66681 00:08:39.602 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:39.602 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:39.602 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:39.603 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:39.603 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:39.603 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:39.603 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:39.603 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.603 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.603 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.603 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.603 23:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.136 23:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:42.137 00:08:42.137 real 0m19.262s 00:08:42.137 user 1m5.594s 00:08:42.137 sys 0m5.553s 00:08:42.137 23:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.137 23:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.137 ************************************ 00:08:42.137 END TEST nvmf_lvol 00:08:42.137 ************************************ 00:08:42.137 23:33:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:42.137 23:33:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.137 23:33:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.137 23:33:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.137 ************************************ 00:08:42.137 START TEST nvmf_lvs_grow 00:08:42.137 ************************************ 00:08:42.137 23:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:42.137 * Looking for test storage... 
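Teardown in the trace above mirrors the setup: the subsystem, lvol and lvstore are deleted over RPC, nvmftestfini unloads the nvme-tcp / nvme-fabrics / nvme-keyring modules, kills the nvmf_tgt process, strips the SPDK_NVMF-tagged iptables rules, removes the namespace (inside _remove_spdk_ns, whose output is suppressed above) and flushes the leftover address. A compressed sketch of that cleanup, reusing $lvs and $lvol from the provisioning sketch and assuming $nvmfpid holds the target PID (66681 in this run); ip netns delete stands in for the suppressed _remove_spdk_ns helper:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring       # best effort, as in nvmfcleanup
kill "$nvmfpid"                                         # killprocess also waits for the PID to exit
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the rules this test inserted
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1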
00:08:42.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.137 23:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.137 23:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.137 23:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.137 --rc genhtml_branch_coverage=1 00:08:42.137 --rc genhtml_function_coverage=1 00:08:42.137 --rc genhtml_legend=1 00:08:42.137 --rc geninfo_all_blocks=1 00:08:42.137 --rc geninfo_unexecuted_blocks=1 00:08:42.137 00:08:42.137 ' 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.137 --rc genhtml_branch_coverage=1 00:08:42.137 --rc genhtml_function_coverage=1 00:08:42.137 --rc genhtml_legend=1 00:08:42.137 --rc geninfo_all_blocks=1 00:08:42.137 --rc geninfo_unexecuted_blocks=1 00:08:42.137 00:08:42.137 ' 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.137 --rc genhtml_branch_coverage=1 00:08:42.137 --rc genhtml_function_coverage=1 00:08:42.137 --rc genhtml_legend=1 00:08:42.137 --rc geninfo_all_blocks=1 00:08:42.137 --rc geninfo_unexecuted_blocks=1 00:08:42.137 00:08:42.137 ' 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.137 --rc genhtml_branch_coverage=1 00:08:42.137 --rc genhtml_function_coverage=1 00:08:42.137 --rc genhtml_legend=1 00:08:42.137 --rc geninfo_all_blocks=1 00:08:42.137 --rc geninfo_unexecuted_blocks=1 00:08:42.137 00:08:42.137 ' 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:42.137 23:33:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.137 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.138 23:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:44.038 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:44.038 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.038 23:33:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:44.038 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:44.038 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.038 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:44.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:08:44.297 00:08:44.297 --- 10.0.0.2 ping statistics --- 00:08:44.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.297 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:08:44.297 00:08:44.297 --- 10.0.0.1 ping statistics --- 00:08:44.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.297 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=71039 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 71039 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 71039 ']' 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.297 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:44.297 [2024-11-19 23:33:18.488019] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
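nvmfappstart, whose startup log begins above, launches nvmf_tgt inside the target namespace with the requested core mask (0x7 for the lvol run earlier, 0x1 here) and then blocks in waitforlisten until the application answers on its RPC socket. A reduced sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket; the real waitforlisten also bounds its retries and verifies the PID is still alive:

NS=cvl_0_0_ns_spdk
app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
ip netns exec "$NS" "$app" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Poll the RPC socket until the target is ready to serve requests.
until $rpc -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done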
00:08:44.297 [2024-11-19 23:33:18.488123] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.297 [2024-11-19 23:33:18.566413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.555 [2024-11-19 23:33:18.614184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.555 [2024-11-19 23:33:18.614243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.555 [2024-11-19 23:33:18.614259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.555 [2024-11-19 23:33:18.614273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.555 [2024-11-19 23:33:18.614285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.555 [2024-11-19 23:33:18.614949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.555 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.555 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:44.555 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.555 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.555 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:44.555 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.555 23:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:44.812 [2024-11-19 23:33:19.013554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:44.812 ************************************ 00:08:44.812 START TEST lvs_grow_clean 00:08:44.812 ************************************ 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:44.812 23:33:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.812 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.070 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:45.071 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:45.637 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=81063cc4-a55b-4cd6-b379-41a5ece33ffd 00:08:45.637 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81063cc4-a55b-4cd6-b379-41a5ece33ffd 00:08:45.637 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:45.896 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:45.896 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:45.896 23:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 81063cc4-a55b-4cd6-b379-41a5ece33ffd lvol 150 00:08:46.154 23:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=cb61bffc-e94f-4439-a425-9767fed1c0e9 00:08:46.154 23:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.154 23:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:46.413 [2024-11-19 23:33:20.535842] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:46.413 [2024-11-19 23:33:20.535951] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:46.413 true 00:08:46.413 23:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
81063cc4-a55b-4cd6-b379-41a5ece33ffd 00:08:46.413 23:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:46.671 23:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:46.671 23:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:46.930 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cb61bffc-e94f-4439-a425-9767fed1c0e9 00:08:47.188 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:47.447 [2024-11-19 23:33:21.643335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.447 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:47.704 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71479 00:08:47.704 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:47.704 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:47.704 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71479 /var/tmp/bdevperf.sock 00:08:47.704 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 71479 ']' 00:08:47.704 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:47.705 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.705 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:47.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:47.705 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.705 23:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:47.705 [2024-11-19 23:33:21.972024] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
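The lvs_grow_clean steps traced above come down to: back a logical volume store with a 200M AIO file, carve a 150M lvol out of it, grow the file to 400M, rescan the AIO bdev, confirm the lvstore still reports the original 49 data clusters until bdev_lvol_grow_lvstore runs later in the test, and export the lvol over NVMe/TCP. A sketch under the assumption that $rootdir and $testdir abbreviate the jenkins workspace paths and rpc.py talks to the nvmf_tgt started above:

  rpc=$rootdir/scripts/rpc.py
  truncate -s 200M $testdir/aio_bdev
  $rpc bdev_aio_create $testdir/aio_bdev aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u $lvs lvol 150)            # 150 MiB lvol
  truncate -s 400M $testdir/aio_bdev                        # grow the backing file
  $rpc bdev_aio_rescan aio_bdev                             # 51200 -> 102400 blocks
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # still 49
  # Export the lvol over NVMe/TCP so bdevperf can drive I/O at it.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420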
00:08:47.705 [2024-11-19 23:33:21.972137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71479 ] 00:08:47.963 [2024-11-19 23:33:22.043566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.963 [2024-11-19 23:33:22.092406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.963 23:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.963 23:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:47.963 23:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:48.528 Nvme0n1 00:08:48.528 23:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:48.787 [ 00:08:48.787 { 00:08:48.787 "name": "Nvme0n1", 00:08:48.787 "aliases": [ 00:08:48.787 "cb61bffc-e94f-4439-a425-9767fed1c0e9" 00:08:48.787 ], 00:08:48.787 "product_name": "NVMe disk", 00:08:48.787 "block_size": 4096, 00:08:48.787 "num_blocks": 38912, 00:08:48.787 "uuid": "cb61bffc-e94f-4439-a425-9767fed1c0e9", 00:08:48.787 "numa_id": 0, 00:08:48.787 "assigned_rate_limits": { 00:08:48.787 "rw_ios_per_sec": 0, 00:08:48.787 "rw_mbytes_per_sec": 0, 00:08:48.787 "r_mbytes_per_sec": 0, 00:08:48.787 "w_mbytes_per_sec": 0 00:08:48.787 }, 00:08:48.787 "claimed": false, 00:08:48.787 "zoned": false, 00:08:48.787 "supported_io_types": { 00:08:48.787 "read": true, 00:08:48.787 "write": true, 00:08:48.787 "unmap": true, 00:08:48.787 "flush": true, 00:08:48.787 "reset": true, 00:08:48.787 "nvme_admin": true, 00:08:48.787 "nvme_io": true, 00:08:48.787 "nvme_io_md": false, 00:08:48.787 "write_zeroes": true, 00:08:48.787 "zcopy": false, 00:08:48.787 "get_zone_info": false, 00:08:48.787 "zone_management": false, 00:08:48.787 "zone_append": false, 00:08:48.787 "compare": true, 00:08:48.787 "compare_and_write": true, 00:08:48.787 "abort": true, 00:08:48.787 "seek_hole": false, 00:08:48.787 "seek_data": false, 00:08:48.787 "copy": true, 00:08:48.787 "nvme_iov_md": false 00:08:48.787 }, 00:08:48.787 "memory_domains": [ 00:08:48.787 { 00:08:48.787 "dma_device_id": "system", 00:08:48.787 "dma_device_type": 1 00:08:48.787 } 00:08:48.787 ], 00:08:48.787 "driver_specific": { 00:08:48.787 "nvme": [ 00:08:48.787 { 00:08:48.787 "trid": { 00:08:48.787 "trtype": "TCP", 00:08:48.787 "adrfam": "IPv4", 00:08:48.787 "traddr": "10.0.0.2", 00:08:48.787 "trsvcid": "4420", 00:08:48.787 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:48.787 }, 00:08:48.787 "ctrlr_data": { 00:08:48.787 "cntlid": 1, 00:08:48.787 "vendor_id": "0x8086", 00:08:48.787 "model_number": "SPDK bdev Controller", 00:08:48.787 "serial_number": "SPDK0", 00:08:48.787 "firmware_revision": "25.01", 00:08:48.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:48.787 "oacs": { 00:08:48.787 "security": 0, 00:08:48.787 "format": 0, 00:08:48.787 "firmware": 0, 00:08:48.787 "ns_manage": 0 00:08:48.787 }, 00:08:48.787 "multi_ctrlr": true, 00:08:48.787 
"ana_reporting": false 00:08:48.787 }, 00:08:48.787 "vs": { 00:08:48.787 "nvme_version": "1.3" 00:08:48.787 }, 00:08:48.787 "ns_data": { 00:08:48.787 "id": 1, 00:08:48.787 "can_share": true 00:08:48.787 } 00:08:48.787 } 00:08:48.787 ], 00:08:48.787 "mp_policy": "active_passive" 00:08:48.787 } 00:08:48.787 } 00:08:48.787 ] 00:08:48.787 23:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=71613 00:08:48.787 23:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:48.787 23:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:48.787 Running I/O for 10 seconds... 00:08:50.161 Latency(us) 00:08:50.161 [2024-11-19T22:33:24.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.162 Nvme0n1 : 1.00 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:08:50.162 [2024-11-19T22:33:24.474Z] =================================================================================================================== 00:08:50.162 [2024-11-19T22:33:24.474Z] Total : 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:08:50.162 00:08:50.734 23:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 81063cc4-a55b-4cd6-b379-41a5ece33ffd 00:08:51.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.098 Nvme0n1 : 2.00 14288.00 55.81 0.00 0.00 0.00 0.00 0.00 00:08:51.098 [2024-11-19T22:33:25.410Z] =================================================================================================================== 00:08:51.098 [2024-11-19T22:33:25.410Z] Total : 14288.00 55.81 0.00 0.00 0.00 0.00 0.00 00:08:51.098 00:08:51.098 true 00:08:51.098 23:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81063cc4-a55b-4cd6-b379-41a5ece33ffd 00:08:51.098 23:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:51.356 23:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:51.356 23:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:51.356 23:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 71613 00:08:51.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.920 Nvme0n1 : 3.00 14351.33 56.06 0.00 0.00 0.00 0.00 0.00 00:08:51.920 [2024-11-19T22:33:26.232Z] =================================================================================================================== 00:08:51.920 [2024-11-19T22:33:26.232Z] Total : 14351.33 56.06 0.00 0.00 0.00 0.00 0.00 00:08:51.920 00:08:52.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.851 Nvme0n1 : 4.00 14446.50 56.43 0.00 0.00 0.00 0.00 0.00 00:08:52.851 [2024-11-19T22:33:27.163Z] 
=================================================================================================================== 00:08:52.851 [2024-11-19T22:33:27.163Z] Total : 14446.50 56.43 0.00 0.00 0.00 0.00 0.00 00:08:52.851 00:08:53.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.782 Nvme0n1 : 5.00 14503.60 56.65 0.00 0.00 0.00 0.00 0.00 00:08:53.782 [2024-11-19T22:33:28.094Z] =================================================================================================================== 00:08:53.782 [2024-11-19T22:33:28.094Z] Total : 14503.60 56.65 0.00 0.00 0.00 0.00 0.00 00:08:53.782 00:08:55.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.154 Nvme0n1 : 6.00 14541.67 56.80 0.00 0.00 0.00 0.00 0.00 00:08:55.154 [2024-11-19T22:33:29.466Z] =================================================================================================================== 00:08:55.154 [2024-11-19T22:33:29.466Z] Total : 14541.67 56.80 0.00 0.00 0.00 0.00 0.00 00:08:55.154 00:08:56.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.088 Nvme0n1 : 7.00 14568.86 56.91 0.00 0.00 0.00 0.00 0.00 00:08:56.088 [2024-11-19T22:33:30.400Z] =================================================================================================================== 00:08:56.088 [2024-11-19T22:33:30.400Z] Total : 14568.86 56.91 0.00 0.00 0.00 0.00 0.00 00:08:56.088 00:08:57.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.021 Nvme0n1 : 8.00 14590.00 56.99 0.00 0.00 0.00 0.00 0.00 00:08:57.021 [2024-11-19T22:33:31.333Z] =================================================================================================================== 00:08:57.021 [2024-11-19T22:33:31.333Z] Total : 14590.00 56.99 0.00 0.00 0.00 0.00 0.00 00:08:57.021 00:08:57.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.956 Nvme0n1 : 9.00 14605.78 57.05 0.00 0.00 0.00 0.00 0.00 00:08:57.956 [2024-11-19T22:33:32.268Z] =================================================================================================================== 00:08:57.956 [2024-11-19T22:33:32.268Z] Total : 14605.78 57.05 0.00 0.00 0.00 0.00 0.00 00:08:57.956 00:08:58.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.891 Nvme0n1 : 10.00 14631.10 57.15 0.00 0.00 0.00 0.00 0.00 00:08:58.891 [2024-11-19T22:33:33.203Z] =================================================================================================================== 00:08:58.891 [2024-11-19T22:33:33.203Z] Total : 14631.10 57.15 0.00 0.00 0.00 0.00 0.00 00:08:58.891 00:08:58.891 00:08:58.891 Latency(us) 00:08:58.891 [2024-11-19T22:33:33.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.891 Nvme0n1 : 10.01 14631.47 57.15 0.00 0.00 8743.32 2354.44 16990.81 00:08:58.891 [2024-11-19T22:33:33.203Z] =================================================================================================================== 00:08:58.891 [2024-11-19T22:33:33.203Z] Total : 14631.47 57.15 0.00 0.00 8743.32 2354.44 16990.81 00:08:58.891 { 00:08:58.891 "results": [ 00:08:58.891 { 00:08:58.891 "job": "Nvme0n1", 00:08:58.891 "core_mask": "0x2", 00:08:58.891 "workload": "randwrite", 00:08:58.891 "status": "finished", 00:08:58.891 "queue_depth": 128, 00:08:58.891 "io_size": 4096, 00:08:58.891 
"runtime": 10.008496, 00:08:58.891 "iops": 14631.46910384937, 00:08:58.891 "mibps": 57.1541761869116, 00:08:58.891 "io_failed": 0, 00:08:58.891 "io_timeout": 0, 00:08:58.891 "avg_latency_us": 8743.318933667995, 00:08:58.891 "min_latency_us": 2354.4414814814813, 00:08:58.891 "max_latency_us": 16990.814814814814 00:08:58.891 } 00:08:58.891 ], 00:08:58.891 "core_count": 1 00:08:58.891 } 00:08:58.891 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71479 00:08:58.891 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 71479 ']' 00:08:58.891 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 71479 00:08:58.891 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:58.891 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.891 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71479 00:08:58.891 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:58.891 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:58.891 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71479' 00:08:58.891 killing process with pid 71479 00:08:58.891 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 71479 00:08:58.891 Received shutdown signal, test time was about 10.000000 seconds 00:08:58.891 00:08:58.891 Latency(us) 00:08:58.891 [2024-11-19T22:33:33.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.891 [2024-11-19T22:33:33.204Z] =================================================================================================================== 00:08:58.892 [2024-11-19T22:33:33.204Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:58.892 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 71479 00:08:59.149 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:59.406 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:59.664 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81063cc4-a55b-4cd6-b379-41a5ece33ffd 00:08:59.664 23:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:59.921 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:59.921 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:59.921 23:33:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:00.179 [2024-11-19 23:33:34.423006] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:00.179 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81063cc4-a55b-4cd6-b379-41a5ece33ffd 00:09:00.179 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:00.179 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81063cc4-a55b-4cd6-b379-41a5ece33ffd 00:09:00.179 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.179 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.179 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.179 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.179 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.179 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.179 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.179 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:00.179 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81063cc4-a55b-4cd6-b379-41a5ece33ffd 00:09:00.436 request: 00:09:00.436 { 00:09:00.436 "uuid": "81063cc4-a55b-4cd6-b379-41a5ece33ffd", 00:09:00.436 "method": "bdev_lvol_get_lvstores", 00:09:00.436 "req_id": 1 00:09:00.436 } 00:09:00.436 Got JSON-RPC error response 00:09:00.436 response: 00:09:00.436 { 00:09:00.436 "code": -19, 00:09:00.436 "message": "No such device" 00:09:00.436 } 00:09:00.436 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:00.436 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:00.436 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:00.436 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:00.436 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:00.693 aio_bdev 00:09:00.693 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cb61bffc-e94f-4439-a425-9767fed1c0e9 00:09:00.693 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=cb61bffc-e94f-4439-a425-9767fed1c0e9 00:09:00.693 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.693 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:00.693 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.693 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.693 23:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:00.952 23:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cb61bffc-e94f-4439-a425-9767fed1c0e9 -t 2000 00:09:01.210 [ 00:09:01.210 { 00:09:01.210 "name": "cb61bffc-e94f-4439-a425-9767fed1c0e9", 00:09:01.210 "aliases": [ 00:09:01.210 "lvs/lvol" 00:09:01.210 ], 00:09:01.210 "product_name": "Logical Volume", 00:09:01.210 "block_size": 4096, 00:09:01.210 "num_blocks": 38912, 00:09:01.210 "uuid": "cb61bffc-e94f-4439-a425-9767fed1c0e9", 00:09:01.210 "assigned_rate_limits": { 00:09:01.210 "rw_ios_per_sec": 0, 00:09:01.210 "rw_mbytes_per_sec": 0, 00:09:01.210 "r_mbytes_per_sec": 0, 00:09:01.210 "w_mbytes_per_sec": 0 00:09:01.210 }, 00:09:01.210 "claimed": false, 00:09:01.210 "zoned": false, 00:09:01.210 "supported_io_types": { 00:09:01.210 "read": true, 00:09:01.210 "write": true, 00:09:01.210 "unmap": true, 00:09:01.210 "flush": false, 00:09:01.210 "reset": true, 00:09:01.210 "nvme_admin": false, 00:09:01.210 "nvme_io": false, 00:09:01.210 "nvme_io_md": false, 00:09:01.210 "write_zeroes": true, 00:09:01.210 "zcopy": false, 00:09:01.210 "get_zone_info": false, 00:09:01.210 "zone_management": false, 00:09:01.210 "zone_append": false, 00:09:01.210 "compare": false, 00:09:01.210 "compare_and_write": false, 00:09:01.210 "abort": false, 00:09:01.210 "seek_hole": true, 00:09:01.210 "seek_data": true, 00:09:01.210 "copy": false, 00:09:01.210 "nvme_iov_md": false 00:09:01.210 }, 00:09:01.210 "driver_specific": { 00:09:01.210 "lvol": { 00:09:01.210 "lvol_store_uuid": "81063cc4-a55b-4cd6-b379-41a5ece33ffd", 00:09:01.210 "base_bdev": "aio_bdev", 00:09:01.210 "thin_provision": false, 00:09:01.210 "num_allocated_clusters": 38, 00:09:01.210 "snapshot": false, 00:09:01.210 "clone": false, 00:09:01.210 "esnap_clone": false 00:09:01.210 } 00:09:01.210 } 00:09:01.210 } 00:09:01.210 ] 00:09:01.467 23:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:01.467 23:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81063cc4-a55b-4cd6-b379-41a5ece33ffd 00:09:01.467 
23:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:01.724 23:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:01.724 23:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81063cc4-a55b-4cd6-b379-41a5ece33ffd 00:09:01.724 23:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:01.981 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:01.981 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cb61bffc-e94f-4439-a425-9767fed1c0e9 00:09:02.239 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 81063cc4-a55b-4cd6-b379-41a5ece33ffd 00:09:02.496 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:02.754 00:09:02.754 real 0m17.866s 00:09:02.754 user 0m17.359s 00:09:02.754 sys 0m1.855s 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:02.754 ************************************ 00:09:02.754 END TEST lvs_grow_clean 00:09:02.754 ************************************ 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:02.754 ************************************ 00:09:02.754 START TEST lvs_grow_dirty 00:09:02.754 ************************************ 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:02.754 23:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:03.011 23:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:03.012 23:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:03.269 23:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:03.269 23:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:03.269 23:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:03.527 23:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:03.527 23:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:03.527 23:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 lvol 150 00:09:03.785 23:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f089fa14-78c2-4ab2-a743-6d997c5555c3 00:09:03.785 23:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:03.785 23:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:04.043 [2024-11-19 23:33:38.349546] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:04.043 [2024-11-19 23:33:38.349656] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:04.043 true 00:09:04.301 23:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:04.301 23:33:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:04.559 23:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:04.559 23:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:04.816 23:33:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f089fa14-78c2-4ab2-a743-6d997c5555c3 00:09:05.073 23:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:05.329 [2024-11-19 23:33:39.444929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.329 23:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:05.587 23:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73675 00:09:05.587 23:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:05.587 23:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:05.587 23:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73675 /var/tmp/bdevperf.sock 00:09:05.587 23:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 73675 ']' 00:09:05.587 23:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:05.587 23:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.587 23:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:05.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:05.587 23:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.587 23:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:05.587 [2024-11-19 23:33:39.777796] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
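From here the dirty variant repeats the clean I/O phase: bdevperf is started in wait-for-RPC mode on a second core, an NVMe-oF controller is attached to it over TCP, and the lvstore is grown while the 10-second randwrite workload is in flight. A sketch of that initiator-side sequence, reusing the $rootdir and $lvs placeholders from the sketches above:

  # bdevperf on core 1 (-m 0x2), 4K randwrite, queue depth 128, 10 s, waiting for RPC (-z).
  $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  bdevperf_pid=$!
  # (the harness waits for /var/tmp/bdevperf.sock to appear before issuing RPCs)
  $rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  # While I/O is running, the lvstore is grown; total_data_clusters is expected
  # to move from 49 to 99 once the grow completes.
  $rootdir/scripts/rpc.py bdev_lvol_grow_lvstore -u $lvs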
00:09:05.587 [2024-11-19 23:33:39.777881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73675 ] 00:09:05.587 [2024-11-19 23:33:39.848065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.587 [2024-11-19 23:33:39.896842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.845 23:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.845 23:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:05.845 23:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:06.103 Nvme0n1 00:09:06.103 23:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:06.361 [ 00:09:06.361 { 00:09:06.361 "name": "Nvme0n1", 00:09:06.361 "aliases": [ 00:09:06.361 "f089fa14-78c2-4ab2-a743-6d997c5555c3" 00:09:06.361 ], 00:09:06.361 "product_name": "NVMe disk", 00:09:06.361 "block_size": 4096, 00:09:06.361 "num_blocks": 38912, 00:09:06.361 "uuid": "f089fa14-78c2-4ab2-a743-6d997c5555c3", 00:09:06.361 "numa_id": 0, 00:09:06.361 "assigned_rate_limits": { 00:09:06.361 "rw_ios_per_sec": 0, 00:09:06.361 "rw_mbytes_per_sec": 0, 00:09:06.361 "r_mbytes_per_sec": 0, 00:09:06.361 "w_mbytes_per_sec": 0 00:09:06.361 }, 00:09:06.361 "claimed": false, 00:09:06.361 "zoned": false, 00:09:06.361 "supported_io_types": { 00:09:06.361 "read": true, 00:09:06.361 "write": true, 00:09:06.361 "unmap": true, 00:09:06.361 "flush": true, 00:09:06.361 "reset": true, 00:09:06.361 "nvme_admin": true, 00:09:06.361 "nvme_io": true, 00:09:06.361 "nvme_io_md": false, 00:09:06.361 "write_zeroes": true, 00:09:06.361 "zcopy": false, 00:09:06.361 "get_zone_info": false, 00:09:06.361 "zone_management": false, 00:09:06.361 "zone_append": false, 00:09:06.361 "compare": true, 00:09:06.361 "compare_and_write": true, 00:09:06.361 "abort": true, 00:09:06.361 "seek_hole": false, 00:09:06.361 "seek_data": false, 00:09:06.361 "copy": true, 00:09:06.361 "nvme_iov_md": false 00:09:06.361 }, 00:09:06.361 "memory_domains": [ 00:09:06.361 { 00:09:06.361 "dma_device_id": "system", 00:09:06.361 "dma_device_type": 1 00:09:06.361 } 00:09:06.361 ], 00:09:06.361 "driver_specific": { 00:09:06.361 "nvme": [ 00:09:06.361 { 00:09:06.361 "trid": { 00:09:06.361 "trtype": "TCP", 00:09:06.361 "adrfam": "IPv4", 00:09:06.361 "traddr": "10.0.0.2", 00:09:06.361 "trsvcid": "4420", 00:09:06.361 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:06.361 }, 00:09:06.361 "ctrlr_data": { 00:09:06.361 "cntlid": 1, 00:09:06.361 "vendor_id": "0x8086", 00:09:06.361 "model_number": "SPDK bdev Controller", 00:09:06.361 "serial_number": "SPDK0", 00:09:06.361 "firmware_revision": "25.01", 00:09:06.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:06.361 "oacs": { 00:09:06.361 "security": 0, 00:09:06.361 "format": 0, 00:09:06.361 "firmware": 0, 00:09:06.361 "ns_manage": 0 00:09:06.361 }, 00:09:06.361 "multi_ctrlr": true, 00:09:06.361 
"ana_reporting": false 00:09:06.361 }, 00:09:06.361 "vs": { 00:09:06.361 "nvme_version": "1.3" 00:09:06.361 }, 00:09:06.361 "ns_data": { 00:09:06.361 "id": 1, 00:09:06.361 "can_share": true 00:09:06.361 } 00:09:06.361 } 00:09:06.361 ], 00:09:06.361 "mp_policy": "active_passive" 00:09:06.361 } 00:09:06.361 } 00:09:06.361 ] 00:09:06.361 23:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73685 00:09:06.361 23:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:06.361 23:33:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:06.620 Running I/O for 10 seconds... 00:09:07.553 Latency(us) 00:09:07.553 [2024-11-19T22:33:41.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.553 Nvme0n1 : 1.00 13908.00 54.33 0.00 0.00 0.00 0.00 0.00 00:09:07.553 [2024-11-19T22:33:41.865Z] =================================================================================================================== 00:09:07.553 [2024-11-19T22:33:41.865Z] Total : 13908.00 54.33 0.00 0.00 0.00 0.00 0.00 00:09:07.553 00:09:08.487 23:33:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:08.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.487 Nvme0n1 : 2.00 14162.50 55.32 0.00 0.00 0.00 0.00 0.00 00:09:08.487 [2024-11-19T22:33:42.799Z] =================================================================================================================== 00:09:08.487 [2024-11-19T22:33:42.799Z] Total : 14162.50 55.32 0.00 0.00 0.00 0.00 0.00 00:09:08.487 00:09:08.745 true 00:09:08.745 23:33:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:08.745 23:33:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:09.003 23:33:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:09.003 23:33:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:09.003 23:33:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 73685 00:09:09.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.568 Nvme0n1 : 3.00 14247.00 55.65 0.00 0.00 0.00 0.00 0.00 00:09:09.568 [2024-11-19T22:33:43.880Z] =================================================================================================================== 00:09:09.568 [2024-11-19T22:33:43.880Z] Total : 14247.00 55.65 0.00 0.00 0.00 0.00 0.00 00:09:09.568 00:09:10.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.500 Nvme0n1 : 4.00 14321.00 55.94 0.00 0.00 0.00 0.00 0.00 00:09:10.500 [2024-11-19T22:33:44.812Z] 
=================================================================================================================== 00:09:10.500 [2024-11-19T22:33:44.812Z] Total : 14321.00 55.94 0.00 0.00 0.00 0.00 0.00 00:09:10.500 00:09:11.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.433 Nvme0n1 : 5.00 14403.20 56.26 0.00 0.00 0.00 0.00 0.00 00:09:11.433 [2024-11-19T22:33:45.745Z] =================================================================================================================== 00:09:11.433 [2024-11-19T22:33:45.745Z] Total : 14403.20 56.26 0.00 0.00 0.00 0.00 0.00 00:09:11.433 00:09:12.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.806 Nvme0n1 : 6.00 14439.67 56.40 0.00 0.00 0.00 0.00 0.00 00:09:12.806 [2024-11-19T22:33:47.118Z] =================================================================================================================== 00:09:12.806 [2024-11-19T22:33:47.118Z] Total : 14439.67 56.40 0.00 0.00 0.00 0.00 0.00 00:09:12.806 00:09:13.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.740 Nvme0n1 : 7.00 14499.57 56.64 0.00 0.00 0.00 0.00 0.00 00:09:13.740 [2024-11-19T22:33:48.052Z] =================================================================================================================== 00:09:13.740 [2024-11-19T22:33:48.052Z] Total : 14499.57 56.64 0.00 0.00 0.00 0.00 0.00 00:09:13.740 00:09:14.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.673 Nvme0n1 : 8.00 14544.50 56.81 0.00 0.00 0.00 0.00 0.00 00:09:14.673 [2024-11-19T22:33:48.985Z] =================================================================================================================== 00:09:14.673 [2024-11-19T22:33:48.985Z] Total : 14544.50 56.81 0.00 0.00 0.00 0.00 0.00 00:09:14.673 00:09:15.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.605 Nvme0n1 : 9.00 14565.33 56.90 0.00 0.00 0.00 0.00 0.00 00:09:15.605 [2024-11-19T22:33:49.917Z] =================================================================================================================== 00:09:15.605 [2024-11-19T22:33:49.917Z] Total : 14565.33 56.90 0.00 0.00 0.00 0.00 0.00 00:09:15.605 00:09:16.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.538 Nvme0n1 : 10.00 14595.30 57.01 0.00 0.00 0.00 0.00 0.00 00:09:16.538 [2024-11-19T22:33:50.850Z] =================================================================================================================== 00:09:16.538 [2024-11-19T22:33:50.850Z] Total : 14595.30 57.01 0.00 0.00 0.00 0.00 0.00 00:09:16.538 00:09:16.538 00:09:16.538 Latency(us) 00:09:16.538 [2024-11-19T22:33:50.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.538 Nvme0n1 : 10.00 14602.24 57.04 0.00 0.00 8761.15 4636.07 17476.27 00:09:16.538 [2024-11-19T22:33:50.850Z] =================================================================================================================== 00:09:16.538 [2024-11-19T22:33:50.850Z] Total : 14602.24 57.04 0.00 0.00 8761.15 4636.07 17476.27 00:09:16.538 { 00:09:16.538 "results": [ 00:09:16.538 { 00:09:16.538 "job": "Nvme0n1", 00:09:16.538 "core_mask": "0x2", 00:09:16.538 "workload": "randwrite", 00:09:16.538 "status": "finished", 00:09:16.538 "queue_depth": 128, 00:09:16.538 "io_size": 4096, 00:09:16.538 
"runtime": 10.004016, 00:09:16.538 "iops": 14602.235742125962, 00:09:16.538 "mibps": 57.03998336767954, 00:09:16.538 "io_failed": 0, 00:09:16.538 "io_timeout": 0, 00:09:16.538 "avg_latency_us": 8761.148771090215, 00:09:16.538 "min_latency_us": 4636.065185185185, 00:09:16.538 "max_latency_us": 17476.266666666666 00:09:16.538 } 00:09:16.538 ], 00:09:16.538 "core_count": 1 00:09:16.538 } 00:09:16.538 23:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73675 00:09:16.538 23:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 73675 ']' 00:09:16.538 23:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 73675 00:09:16.538 23:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:16.538 23:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.538 23:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73675 00:09:16.538 23:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:16.538 23:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:16.538 23:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73675' 00:09:16.538 killing process with pid 73675 00:09:16.538 23:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 73675 00:09:16.538 Received shutdown signal, test time was about 10.000000 seconds 00:09:16.538 00:09:16.538 Latency(us) 00:09:16.538 [2024-11-19T22:33:50.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.538 [2024-11-19T22:33:50.850Z] =================================================================================================================== 00:09:16.538 [2024-11-19T22:33:50.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:16.538 23:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 73675 00:09:16.796 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:17.053 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:17.312 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:17.312 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:17.570 23:33:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 71039 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 71039 00:09:17.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 71039 Killed "${NVMF_APP[@]}" "$@" 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=75023 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 75023 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 75023 ']' 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.570 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.571 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.571 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.571 23:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:17.829 [2024-11-19 23:33:51.913970] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:09:17.829 [2024-11-19 23:33:51.914080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.829 [2024-11-19 23:33:51.987202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.829 [2024-11-19 23:33:52.032925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.829 [2024-11-19 23:33:52.032992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.829 [2024-11-19 23:33:52.033020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.829 [2024-11-19 23:33:52.033031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
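What makes this the dirty path: instead of tearing the lvol and lvstore down first, the test kill -9's the original target (pid 71039 in this run) so the lvstore metadata never gets a clean shutdown, starts a fresh target in the same namespace, and re-creates the AIO bdev, which forces blobstore recovery when the lvstore is loaded (the bs_recover notices below). A sketch, assuming $nvmfpid, $lvs, $testdir and $rootdir carry over from the earlier sketches:

  # Hard-kill the first target so the lvstore is left dirty.
  kill -9 $nvmfpid
  wait $nvmfpid || true          # reaps the "Killed" status, as in the trace
  # Fresh target, then reload the dirty lvstore by re-creating the AIO bdev;
  # blobstore recovery runs during the load.
  ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  $rootdir/scripts/rpc.py bdev_aio_create $testdir/aio_bdev aio_bdev 4096
  # The grown geometry must survive recovery:
  $rootdir/scripts/rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'        # expects 61
  $rootdir/scripts/rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'  # expects 99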
00:09:17.829 [2024-11-19 23:33:52.033041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.829 [2024-11-19 23:33:52.033679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.089 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.089 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:18.089 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.089 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.089 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:18.089 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.089 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.347 [2024-11-19 23:33:52.426196] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:18.347 [2024-11-19 23:33:52.426364] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:18.347 [2024-11-19 23:33:52.426426] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:18.347 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:18.347 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f089fa14-78c2-4ab2-a743-6d997c5555c3 00:09:18.347 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f089fa14-78c2-4ab2-a743-6d997c5555c3 00:09:18.347 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.347 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:18.347 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.347 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.347 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:18.605 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f089fa14-78c2-4ab2-a743-6d997c5555c3 -t 2000 00:09:18.863 [ 00:09:18.863 { 00:09:18.863 "name": "f089fa14-78c2-4ab2-a743-6d997c5555c3", 00:09:18.863 "aliases": [ 00:09:18.863 "lvs/lvol" 00:09:18.863 ], 00:09:18.863 "product_name": "Logical Volume", 00:09:18.863 "block_size": 4096, 00:09:18.863 "num_blocks": 38912, 00:09:18.863 "uuid": "f089fa14-78c2-4ab2-a743-6d997c5555c3", 00:09:18.863 "assigned_rate_limits": { 00:09:18.863 "rw_ios_per_sec": 0, 00:09:18.863 "rw_mbytes_per_sec": 0, 
00:09:18.863 "r_mbytes_per_sec": 0, 00:09:18.863 "w_mbytes_per_sec": 0 00:09:18.863 }, 00:09:18.863 "claimed": false, 00:09:18.863 "zoned": false, 00:09:18.863 "supported_io_types": { 00:09:18.863 "read": true, 00:09:18.863 "write": true, 00:09:18.863 "unmap": true, 00:09:18.863 "flush": false, 00:09:18.863 "reset": true, 00:09:18.863 "nvme_admin": false, 00:09:18.863 "nvme_io": false, 00:09:18.863 "nvme_io_md": false, 00:09:18.863 "write_zeroes": true, 00:09:18.863 "zcopy": false, 00:09:18.863 "get_zone_info": false, 00:09:18.863 "zone_management": false, 00:09:18.863 "zone_append": false, 00:09:18.863 "compare": false, 00:09:18.863 "compare_and_write": false, 00:09:18.863 "abort": false, 00:09:18.863 "seek_hole": true, 00:09:18.863 "seek_data": true, 00:09:18.863 "copy": false, 00:09:18.863 "nvme_iov_md": false 00:09:18.863 }, 00:09:18.863 "driver_specific": { 00:09:18.863 "lvol": { 00:09:18.863 "lvol_store_uuid": "29181786-68c7-4bc2-8ed8-3bb16bebb168", 00:09:18.863 "base_bdev": "aio_bdev", 00:09:18.863 "thin_provision": false, 00:09:18.863 "num_allocated_clusters": 38, 00:09:18.863 "snapshot": false, 00:09:18.863 "clone": false, 00:09:18.863 "esnap_clone": false 00:09:18.863 } 00:09:18.863 } 00:09:18.863 } 00:09:18.863 ] 00:09:18.863 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:18.863 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:18.863 23:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:19.121 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:19.121 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:19.121 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:19.378 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:19.378 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.637 [2024-11-19 23:33:53.779764] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:19.637 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:19.637 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:19.637 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:19.637 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.637 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.637 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.637 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.637 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.637 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.637 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.637 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:19.637 23:33:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:19.895 request: 00:09:19.895 { 00:09:19.895 "uuid": "29181786-68c7-4bc2-8ed8-3bb16bebb168", 00:09:19.895 "method": "bdev_lvol_get_lvstores", 00:09:19.895 "req_id": 1 00:09:19.895 } 00:09:19.895 Got JSON-RPC error response 00:09:19.895 response: 00:09:19.895 { 00:09:19.895 "code": -19, 00:09:19.895 "message": "No such device" 00:09:19.895 } 00:09:19.895 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:19.895 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:19.895 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:19.895 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:19.895 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:20.153 aio_bdev 00:09:20.153 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f089fa14-78c2-4ab2-a743-6d997c5555c3 00:09:20.153 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f089fa14-78c2-4ab2-a743-6d997c5555c3 00:09:20.153 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.153 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:20.153 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.153 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.153 23:33:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:20.410 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f089fa14-78c2-4ab2-a743-6d997c5555c3 -t 2000 00:09:20.699 [ 00:09:20.699 { 00:09:20.699 "name": "f089fa14-78c2-4ab2-a743-6d997c5555c3", 00:09:20.699 "aliases": [ 00:09:20.699 "lvs/lvol" 00:09:20.699 ], 00:09:20.699 "product_name": "Logical Volume", 00:09:20.699 "block_size": 4096, 00:09:20.699 "num_blocks": 38912, 00:09:20.699 "uuid": "f089fa14-78c2-4ab2-a743-6d997c5555c3", 00:09:20.699 "assigned_rate_limits": { 00:09:20.699 "rw_ios_per_sec": 0, 00:09:20.699 "rw_mbytes_per_sec": 0, 00:09:20.699 "r_mbytes_per_sec": 0, 00:09:20.699 "w_mbytes_per_sec": 0 00:09:20.699 }, 00:09:20.699 "claimed": false, 00:09:20.699 "zoned": false, 00:09:20.699 "supported_io_types": { 00:09:20.699 "read": true, 00:09:20.699 "write": true, 00:09:20.699 "unmap": true, 00:09:20.699 "flush": false, 00:09:20.699 "reset": true, 00:09:20.699 "nvme_admin": false, 00:09:20.699 "nvme_io": false, 00:09:20.699 "nvme_io_md": false, 00:09:20.699 "write_zeroes": true, 00:09:20.699 "zcopy": false, 00:09:20.699 "get_zone_info": false, 00:09:20.699 "zone_management": false, 00:09:20.699 "zone_append": false, 00:09:20.699 "compare": false, 00:09:20.699 "compare_and_write": false, 00:09:20.699 "abort": false, 00:09:20.699 "seek_hole": true, 00:09:20.699 "seek_data": true, 00:09:20.699 "copy": false, 00:09:20.699 "nvme_iov_md": false 00:09:20.699 }, 00:09:20.699 "driver_specific": { 00:09:20.699 "lvol": { 00:09:20.699 "lvol_store_uuid": "29181786-68c7-4bc2-8ed8-3bb16bebb168", 00:09:20.699 "base_bdev": "aio_bdev", 00:09:20.699 "thin_provision": false, 00:09:20.699 "num_allocated_clusters": 38, 00:09:20.699 "snapshot": false, 00:09:20.699 "clone": false, 00:09:20.699 "esnap_clone": false 00:09:20.699 } 00:09:20.699 } 00:09:20.699 } 00:09:20.699 ] 00:09:20.699 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:20.699 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:20.699 23:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:20.984 23:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:20.984 23:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:20.984 23:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:21.242 23:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:21.242 23:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f089fa14-78c2-4ab2-a743-6d997c5555c3 00:09:21.499 23:33:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 29181786-68c7-4bc2-8ed8-3bb16bebb168 00:09:21.757 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:22.014 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:22.272 00:09:22.272 real 0m19.354s 00:09:22.272 user 0m49.139s 00:09:22.272 sys 0m4.660s 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:22.272 ************************************ 00:09:22.272 END TEST lvs_grow_dirty 00:09:22.272 ************************************ 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:22.272 nvmf_trace.0 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.272 rmmod nvme_tcp 00:09:22.272 rmmod nvme_fabrics 00:09:22.272 rmmod nvme_keyring 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:22.272 
23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 75023 ']' 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 75023 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 75023 ']' 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 75023 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75023 00:09:22.272 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.273 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.273 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75023' 00:09:22.273 killing process with pid 75023 00:09:22.273 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 75023 00:09:22.273 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 75023 00:09:22.532 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:22.532 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:22.532 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:22.532 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:22.532 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:22.532 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:22.532 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:22.532 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:22.532 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:22.532 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.532 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.532 23:33:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.432 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.432 00:09:24.432 real 0m42.792s 00:09:24.432 user 1m12.560s 00:09:24.432 sys 0m8.462s 00:09:24.432 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.432 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.432 ************************************ 00:09:24.432 END TEST nvmf_lvs_grow 00:09:24.432 ************************************ 00:09:24.690 23:33:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:24.690 23:33:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.690 23:33:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.691 ************************************ 00:09:24.691 START TEST nvmf_bdev_io_wait 00:09:24.691 ************************************ 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:24.691 * Looking for test storage... 00:09:24.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:24.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.691 --rc genhtml_branch_coverage=1 00:09:24.691 --rc genhtml_function_coverage=1 00:09:24.691 --rc genhtml_legend=1 00:09:24.691 --rc geninfo_all_blocks=1 00:09:24.691 --rc geninfo_unexecuted_blocks=1 00:09:24.691 00:09:24.691 ' 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:24.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.691 --rc genhtml_branch_coverage=1 00:09:24.691 --rc genhtml_function_coverage=1 00:09:24.691 --rc genhtml_legend=1 00:09:24.691 --rc geninfo_all_blocks=1 00:09:24.691 --rc geninfo_unexecuted_blocks=1 00:09:24.691 00:09:24.691 ' 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:24.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.691 --rc genhtml_branch_coverage=1 00:09:24.691 --rc genhtml_function_coverage=1 00:09:24.691 --rc genhtml_legend=1 00:09:24.691 --rc geninfo_all_blocks=1 00:09:24.691 --rc geninfo_unexecuted_blocks=1 00:09:24.691 00:09:24.691 ' 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:24.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.691 --rc genhtml_branch_coverage=1 00:09:24.691 --rc genhtml_function_coverage=1 00:09:24.691 --rc genhtml_legend=1 00:09:24.691 --rc geninfo_all_blocks=1 00:09:24.691 --rc geninfo_unexecuted_blocks=1 00:09:24.691 00:09:24.691 ' 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.691 23:33:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.691 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.692 23:33:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.221 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:27.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:27.222 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.222 23:34:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:27.222 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:27.222 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.222 23:34:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:27.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:09:27.222 00:09:27.222 --- 10.0.0.2 ping statistics --- 00:09:27.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.222 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:09:27.222 00:09:27.222 --- 10.0.0.1 ping statistics --- 00:09:27.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.222 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=77572 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 77572 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 77572 ']' 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.222 [2024-11-19 23:34:01.139178] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:09:27.222 [2024-11-19 23:34:01.139257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.222 [2024-11-19 23:34:01.216258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.222 [2024-11-19 23:34:01.264168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.222 [2024-11-19 23:34:01.264224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.222 [2024-11-19 23:34:01.264254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.222 [2024-11-19 23:34:01.264266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.222 [2024-11-19 23:34:01.264276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.222 [2024-11-19 23:34:01.265894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.222 [2024-11-19 23:34:01.265961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.222 [2024-11-19 23:34:01.266011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.222 [2024-11-19 23:34:01.266013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:27.222 [2024-11-19 23:34:01.479421] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.222 Malloc0 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.222 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.481 [2024-11-19 23:34:01.532837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=77718 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=77719 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=77722 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:27.481 { 00:09:27.481 "params": { 00:09:27.481 "name": "Nvme$subsystem", 00:09:27.481 "trtype": "$TEST_TRANSPORT", 00:09:27.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:27.481 "adrfam": "ipv4", 00:09:27.481 "trsvcid": "$NVMF_PORT", 00:09:27.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:27.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:27.481 "hdgst": ${hdgst:-false}, 00:09:27.481 "ddgst": ${ddgst:-false} 00:09:27.481 }, 00:09:27.481 "method": "bdev_nvme_attach_controller" 00:09:27.481 } 00:09:27.481 EOF 00:09:27.481 )") 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:27.481 { 00:09:27.481 "params": { 00:09:27.481 "name": "Nvme$subsystem", 00:09:27.481 "trtype": "$TEST_TRANSPORT", 00:09:27.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:27.481 "adrfam": "ipv4", 00:09:27.481 "trsvcid": "$NVMF_PORT", 00:09:27.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:27.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:27.481 "hdgst": ${hdgst:-false}, 00:09:27.481 "ddgst": ${ddgst:-false} 00:09:27.481 }, 00:09:27.481 "method": "bdev_nvme_attach_controller" 00:09:27.481 } 00:09:27.481 EOF 00:09:27.481 )") 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=77724 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:09:27.481 { 00:09:27.481 "params": { 00:09:27.481 "name": "Nvme$subsystem", 00:09:27.481 "trtype": "$TEST_TRANSPORT", 00:09:27.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:27.481 "adrfam": "ipv4", 00:09:27.481 "trsvcid": "$NVMF_PORT", 00:09:27.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:27.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:27.481 "hdgst": ${hdgst:-false}, 00:09:27.481 "ddgst": ${ddgst:-false} 00:09:27.481 }, 00:09:27.481 "method": "bdev_nvme_attach_controller" 00:09:27.481 } 00:09:27.481 EOF 00:09:27.481 )") 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:27.481 { 00:09:27.481 "params": { 00:09:27.481 "name": "Nvme$subsystem", 00:09:27.481 "trtype": "$TEST_TRANSPORT", 00:09:27.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:27.481 "adrfam": "ipv4", 00:09:27.481 "trsvcid": "$NVMF_PORT", 00:09:27.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:27.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:27.481 "hdgst": ${hdgst:-false}, 00:09:27.481 "ddgst": ${ddgst:-false} 00:09:27.481 }, 00:09:27.481 "method": "bdev_nvme_attach_controller" 00:09:27.481 } 00:09:27.481 EOF 00:09:27.481 )") 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 77718 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
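For readers following the trace: the heredoc fragments and the "jq ." calls above assemble, per bdevperf instance, the JSON configuration that is handed to bdevperf via what appears on the command line as --json /dev/fd/63, i.e. a bash process substitution. A minimal standalone sketch of an equivalent config file follows; the outer "subsystems"/"bdev" wrapper is assumed from the standard SPDK JSON config layout rather than shown in this trace, and bdevperf.json is only an illustrative file name.

    cat > bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF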
00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:27.481 "params": { 00:09:27.481 "name": "Nvme1", 00:09:27.481 "trtype": "tcp", 00:09:27.481 "traddr": "10.0.0.2", 00:09:27.481 "adrfam": "ipv4", 00:09:27.481 "trsvcid": "4420", 00:09:27.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:27.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:27.481 "hdgst": false, 00:09:27.481 "ddgst": false 00:09:27.481 }, 00:09:27.481 "method": "bdev_nvme_attach_controller" 00:09:27.481 }' 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:27.481 "params": { 00:09:27.481 "name": "Nvme1", 00:09:27.481 "trtype": "tcp", 00:09:27.481 "traddr": "10.0.0.2", 00:09:27.481 "adrfam": "ipv4", 00:09:27.481 "trsvcid": "4420", 00:09:27.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:27.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:27.481 "hdgst": false, 00:09:27.481 "ddgst": false 00:09:27.481 }, 00:09:27.481 "method": "bdev_nvme_attach_controller" 00:09:27.481 }' 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:27.481 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:27.481 "params": { 00:09:27.481 "name": "Nvme1", 00:09:27.481 "trtype": "tcp", 00:09:27.481 "traddr": "10.0.0.2", 00:09:27.481 "adrfam": "ipv4", 00:09:27.481 "trsvcid": "4420", 00:09:27.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:27.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:27.481 "hdgst": false, 00:09:27.482 "ddgst": false 00:09:27.482 }, 00:09:27.482 "method": "bdev_nvme_attach_controller" 00:09:27.482 }' 00:09:27.482 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:27.482 23:34:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:27.482 "params": { 00:09:27.482 "name": "Nvme1", 00:09:27.482 "trtype": "tcp", 00:09:27.482 "traddr": "10.0.0.2", 00:09:27.482 "adrfam": "ipv4", 00:09:27.482 "trsvcid": "4420", 00:09:27.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:27.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:27.482 "hdgst": false, 00:09:27.482 "ddgst": false 00:09:27.482 }, 00:09:27.482 "method": "bdev_nvme_attach_controller" 00:09:27.482 }' 00:09:27.482 [2024-11-19 23:34:01.583207] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:09:27.482 [2024-11-19 23:34:01.583211] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:09:27.482 [2024-11-19 23:34:01.583211] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:09:27.482 [2024-11-19 23:34:01.583211] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:09:27.482 [2024-11-19 23:34:01.583285] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:09:27.482 [2024-11-19 23:34:01.583304] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:09:27.482 [2024-11-19 23:34:01.583306] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:09:27.482 [2024-11-19 23:34:01.583307] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
[2024-11-19 23:34:01.763170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.740 [2024-11-19 23:34:01.806167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:27.740 [2024-11-19 23:34:01.836404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.740 [2024-11-19 23:34:01.872667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:27.740 [2024-11-19 23:34:01.931785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.740 [2024-11-19 23:34:01.973400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:27.740 [2024-11-19 23:34:02.030030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.998 [2024-11-19 23:34:02.073263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:27.998 Running I/O for 1 seconds... 00:09:27.998 Running I/O for 1 seconds... 00:09:27.998 Running I/O for 1 seconds... 00:09:28.304 Running I/O for 1 seconds...
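The four "Running I/O for 1 seconds..." lines above come from four bdevperf instances launched in the background, one per workload (write, read, flush, unmap), each pinned to its own core mask and shared-memory id and later reaped with wait, as the WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID variables in the trace show. A condensed sketch of that launch pattern, assuming BDEVPERF points at build/examples/bdevperf and gen_json prints a config like the one sketched earlier (both are placeholders, not the test's own helpers):

    BDEVPERF=/path/to/spdk/build/examples/bdevperf   # placeholder path to the bdevperf binary
    gen_json() { cat bdevperf.json; }                # placeholder for the JSON config generator
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"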
00:09:29.129 6416.00 IOPS, 25.06 MiB/s 00:09:29.129 Latency(us) 00:09:29.129 [2024-11-19T22:34:03.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.129 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:29.129 Nvme1n1 : 1.06 6168.67 24.10 0.00 0.00 19804.55 9369.22 60584.39 00:09:29.129 [2024-11-19T22:34:03.441Z] =================================================================================================================== 00:09:29.129 [2024-11-19T22:34:03.441Z] Total : 6168.67 24.10 0.00 0.00 19804.55 9369.22 60584.39 00:09:29.129 199816.00 IOPS, 780.53 MiB/s 00:09:29.129 Latency(us) 00:09:29.129 [2024-11-19T22:34:03.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.129 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:29.129 Nvme1n1 : 1.00 199441.47 779.07 0.00 0.00 638.53 292.79 1856.85 00:09:29.129 [2024-11-19T22:34:03.441Z] =================================================================================================================== 00:09:29.129 [2024-11-19T22:34:03.441Z] Total : 199441.47 779.07 0.00 0.00 638.53 292.79 1856.85 00:09:29.129 6098.00 IOPS, 23.82 MiB/s 00:09:29.129 Latency(us) 00:09:29.129 [2024-11-19T22:34:03.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.129 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:29.129 Nvme1n1 : 1.01 6195.76 24.20 0.00 0.00 20581.91 5412.79 40195.41 00:09:29.129 [2024-11-19T22:34:03.441Z] =================================================================================================================== 00:09:29.129 [2024-11-19T22:34:03.441Z] Total : 6195.76 24.20 0.00 0.00 20581.91 5412.79 40195.41 00:09:29.129 10137.00 IOPS, 39.60 MiB/s 00:09:29.129 Latency(us) 00:09:29.129 [2024-11-19T22:34:03.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.129 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:29.129 Nvme1n1 : 1.01 10202.68 39.85 0.00 0.00 12498.37 5364.24 22719.15 00:09:29.129 [2024-11-19T22:34:03.441Z] =================================================================================================================== 00:09:29.129 [2024-11-19T22:34:03.442Z] Total : 10202.68 39.85 0.00 0.00 12498.37 5364.24 22719.15 00:09:29.130 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 77719 00:09:29.130 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 77722 00:09:29.130 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 77724 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:29.388 rmmod nvme_tcp 00:09:29.388 rmmod nvme_fabrics 00:09:29.388 rmmod nvme_keyring 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 77572 ']' 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 77572 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 77572 ']' 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 77572 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77572 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77572' 00:09:29.388 killing process with pid 77572 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 77572 00:09:29.388 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 77572 00:09:29.646 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:29.646 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:29.646 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:29.646 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:29.646 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:29.646 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:29.646 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:29.646 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:29.646 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:29.646 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.646 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.646 23:34:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.548 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:31.548 00:09:31.548 real 0m7.024s 00:09:31.548 user 0m15.397s 00:09:31.548 sys 0m3.527s 00:09:31.548 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.548 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.548 ************************************ 00:09:31.548 END TEST nvmf_bdev_io_wait 00:09:31.548 ************************************ 00:09:31.548 23:34:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:31.548 23:34:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:31.548 23:34:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.548 23:34:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:31.548 ************************************ 00:09:31.548 START TEST nvmf_queue_depth 00:09:31.548 ************************************ 00:09:31.548 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:31.808 * Looking for test storage... 
00:09:31.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:31.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.808 --rc genhtml_branch_coverage=1 00:09:31.808 --rc genhtml_function_coverage=1 00:09:31.808 --rc genhtml_legend=1 00:09:31.808 --rc geninfo_all_blocks=1 00:09:31.808 --rc geninfo_unexecuted_blocks=1 00:09:31.808 00:09:31.808 ' 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:31.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.808 --rc genhtml_branch_coverage=1 00:09:31.808 --rc genhtml_function_coverage=1 00:09:31.808 --rc genhtml_legend=1 00:09:31.808 --rc geninfo_all_blocks=1 00:09:31.808 --rc geninfo_unexecuted_blocks=1 00:09:31.808 00:09:31.808 ' 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:31.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.808 --rc genhtml_branch_coverage=1 00:09:31.808 --rc genhtml_function_coverage=1 00:09:31.808 --rc genhtml_legend=1 00:09:31.808 --rc geninfo_all_blocks=1 00:09:31.808 --rc geninfo_unexecuted_blocks=1 00:09:31.808 00:09:31.808 ' 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:31.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.808 --rc genhtml_branch_coverage=1 00:09:31.808 --rc genhtml_function_coverage=1 00:09:31.808 --rc genhtml_legend=1 00:09:31.808 --rc geninfo_all_blocks=1 00:09:31.808 --rc geninfo_unexecuted_blocks=1 00:09:31.808 00:09:31.808 ' 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.808 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:31.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.809 23:34:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.809 23:34:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:31.809 23:34:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:31.809 23:34:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:31.809 23:34:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:33.709 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:33.709 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:33.709 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:33.709 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.709 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.710 23:34:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:09:33.968 00:09:33.968 --- 10.0.0.2 ping statistics --- 00:09:33.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.968 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:09:33.968 00:09:33.968 --- 10.0.0.1 ping statistics --- 00:09:33.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.968 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.968 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=79954 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 79954 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 79954 ']' 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.969 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.969 [2024-11-19 23:34:08.184158] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
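The two ping checks above exercise the split-interface topology that nvmf/common.sh set up earlier in this trace: physical port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), while its sibling cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), with an iptables rule opening TCP port 4420. Condensed from the commands traced above, using the interface names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator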
00:09:33.969 [2024-11-19 23:34:08.184253] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.969 [2024-11-19 23:34:08.260763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.227 [2024-11-19 23:34:08.308140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.227 [2024-11-19 23:34:08.308192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.227 [2024-11-19 23:34:08.308221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.227 [2024-11-19 23:34:08.308233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.227 [2024-11-19 23:34:08.308242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.227 [2024-11-19 23:34:08.308821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.227 [2024-11-19 23:34:08.443834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.227 Malloc0 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.227 23:34:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.227 [2024-11-19 23:34:08.492388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=79979 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 79979 /var/tmp/bdevperf.sock 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 79979 ']' 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:34.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.227 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.485 [2024-11-19 23:34:08.542257] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
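Summarizing the queue-depth setup traced here: the target (pid 79954, started with -m 0x2 inside the cvl_0_0_ns_spdk network namespace) gets a TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 added as its namespace, and a listener on 10.0.0.2:4420; bdevperf is then started with -z against its own RPC socket, handed the controller over that socket, and driven for a 10-second verify workload at queue depth 1024. A rough standalone equivalent using scripts/rpc.py is sketched below (paths are relative to an SPDK checkout; this is a sketch, not the test's literal rpc_cmd wrapper):

    # Target side (rpc.py defaults to the running nvmf_tgt's /var/tmp/spdk.sock)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf waits (-z) on its own socket, is given the controller, then runs
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests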
00:09:34.486 [2024-11-19 23:34:08.542337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79979 ] 00:09:34.486 [2024-11-19 23:34:08.612931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.486 [2024-11-19 23:34:08.661630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.486 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.486 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:34.486 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:34.486 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.486 23:34:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.743 NVMe0n1 00:09:34.743 23:34:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.743 23:34:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:35.001 Running I/O for 10 seconds... 00:09:36.871 7543.00 IOPS, 29.46 MiB/s [2024-11-19T22:34:12.556Z] 7902.00 IOPS, 30.87 MiB/s [2024-11-19T22:34:13.489Z] 8055.67 IOPS, 31.47 MiB/s [2024-11-19T22:34:14.423Z] 8129.50 IOPS, 31.76 MiB/s [2024-11-19T22:34:15.357Z] 8173.20 IOPS, 31.93 MiB/s [2024-11-19T22:34:16.292Z] 8177.50 IOPS, 31.94 MiB/s [2024-11-19T22:34:17.226Z] 8182.86 IOPS, 31.96 MiB/s [2024-11-19T22:34:18.162Z] 8186.12 IOPS, 31.98 MiB/s [2024-11-19T22:34:19.538Z] 8192.56 IOPS, 32.00 MiB/s [2024-11-19T22:34:19.538Z] 8200.20 IOPS, 32.03 MiB/s 00:09:45.226 Latency(us) 00:09:45.226 [2024-11-19T22:34:19.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.226 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:45.226 Verification LBA range: start 0x0 length 0x4000 00:09:45.226 NVMe0n1 : 10.07 8242.07 32.20 0.00 0.00 123654.45 7864.32 80390.83 00:09:45.226 [2024-11-19T22:34:19.538Z] =================================================================================================================== 00:09:45.226 [2024-11-19T22:34:19.538Z] Total : 8242.07 32.20 0.00 0.00 123654.45 7864.32 80390.83 00:09:45.226 { 00:09:45.226 "results": [ 00:09:45.226 { 00:09:45.226 "job": "NVMe0n1", 00:09:45.226 "core_mask": "0x1", 00:09:45.226 "workload": "verify", 00:09:45.226 "status": "finished", 00:09:45.226 "verify_range": { 00:09:45.226 "start": 0, 00:09:45.226 "length": 16384 00:09:45.226 }, 00:09:45.226 "queue_depth": 1024, 00:09:45.226 "io_size": 4096, 00:09:45.226 "runtime": 10.068589, 00:09:45.226 "iops": 8242.068476526354, 00:09:45.226 "mibps": 32.19557998643107, 00:09:45.226 "io_failed": 0, 00:09:45.226 "io_timeout": 0, 00:09:45.226 "avg_latency_us": 123654.44738912678, 00:09:45.226 "min_latency_us": 7864.32, 00:09:45.226 "max_latency_us": 80390.82666666666 00:09:45.226 } 00:09:45.226 ], 00:09:45.226 "core_count": 1 00:09:45.226 } 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 79979 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 79979 ']' 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 79979 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79979 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79979' 00:09:45.226 killing process with pid 79979 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 79979 00:09:45.226 Received shutdown signal, test time was about 10.000000 seconds 00:09:45.226 00:09:45.226 Latency(us) 00:09:45.226 [2024-11-19T22:34:19.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.226 [2024-11-19T22:34:19.538Z] =================================================================================================================== 00:09:45.226 [2024-11-19T22:34:19.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 79979 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.226 rmmod nvme_tcp 00:09:45.226 rmmod nvme_fabrics 00:09:45.226 rmmod nvme_keyring 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 79954 ']' 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 79954 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 79954 ']' 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 79954 00:09:45.226 23:34:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.226 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79954 00:09:45.484 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:45.484 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:45.484 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79954' 00:09:45.484 killing process with pid 79954 00:09:45.484 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 79954 00:09:45.484 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 79954 00:09:45.743 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.743 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.743 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.743 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:45.743 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:45.743 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.743 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.743 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.743 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.743 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.743 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.743 23:34:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.643 23:34:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.643 00:09:47.643 real 0m16.009s 00:09:47.643 user 0m22.586s 00:09:47.643 sys 0m3.052s 00:09:47.643 23:34:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.643 23:34:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.643 ************************************ 00:09:47.643 END TEST nvmf_queue_depth 00:09:47.643 ************************************ 00:09:47.643 23:34:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:47.643 23:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.643 23:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.643 23:34:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.643 ************************************ 
00:09:47.643 START TEST nvmf_target_multipath 00:09:47.643 ************************************ 00:09:47.643 23:34:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:47.643 * Looking for test storage... 00:09:47.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.643 23:34:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.643 23:34:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.643 23:34:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.902 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.903 --rc genhtml_branch_coverage=1 00:09:47.903 --rc genhtml_function_coverage=1 00:09:47.903 --rc genhtml_legend=1 00:09:47.903 --rc geninfo_all_blocks=1 00:09:47.903 --rc geninfo_unexecuted_blocks=1 00:09:47.903 00:09:47.903 ' 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.903 --rc genhtml_branch_coverage=1 00:09:47.903 --rc genhtml_function_coverage=1 00:09:47.903 --rc genhtml_legend=1 00:09:47.903 --rc geninfo_all_blocks=1 00:09:47.903 --rc geninfo_unexecuted_blocks=1 00:09:47.903 00:09:47.903 ' 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.903 --rc genhtml_branch_coverage=1 00:09:47.903 --rc genhtml_function_coverage=1 00:09:47.903 --rc genhtml_legend=1 00:09:47.903 --rc geninfo_all_blocks=1 00:09:47.903 --rc geninfo_unexecuted_blocks=1 00:09:47.903 00:09:47.903 ' 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.903 --rc genhtml_branch_coverage=1 00:09:47.903 --rc genhtml_function_coverage=1 00:09:47.903 --rc genhtml_legend=1 00:09:47.903 --rc geninfo_all_blocks=1 00:09:47.903 --rc geninfo_unexecuted_blocks=1 00:09:47.903 00:09:47.903 ' 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.903 23:34:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.805 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.064 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.064 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.064 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.064 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.064 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.064 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.064 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:50.064 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.064 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:50.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:50.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:50.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.065 23:34:24 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:50.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:50.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:09:50.065 00:09:50.065 --- 10.0.0.2 ping statistics --- 00:09:50.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.065 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:50.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:09:50.065 00:09:50.065 --- 10.0.0.1 ping statistics --- 00:09:50.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.065 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:50.065 only one NIC for nvmf test 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
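The ping checks above come out of the harness's TCP network bring-up: one physical port is moved into a private network namespace to act as the target while the other stays in the root namespace as the initiator. A condensed sketch of that plumbing, using the interface names (cvl_0_0, cvl_0_1), namespace (cvl_0_0_ns_spdk) and addresses detected in this run; on another machine the names would differ:

# Sketch of the bring-up traced above; interface names and IPs are taken from this log.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port towards the initiator interface
# (the harness also tags the rule with an SPDK_NVMF comment so it can be flushed later)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions, as in the ping output above
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1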
00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:50.065 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:50.065 rmmod nvme_tcp 00:09:50.325 rmmod nvme_fabrics 00:09:50.325 rmmod nvme_keyring 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.325 23:34:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.227 00:09:52.227 real 0m4.598s 00:09:52.227 user 0m0.905s 00:09:52.227 sys 0m1.621s 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:52.227 ************************************ 00:09:52.227 END TEST nvmf_target_multipath 00:09:52.227 ************************************ 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.227 23:34:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.487 ************************************ 00:09:52.487 START TEST nvmf_zcopy 00:09:52.487 ************************************ 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:52.487 * Looking for test storage... 
00:09:52.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.487 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.488 --rc genhtml_branch_coverage=1 00:09:52.488 --rc genhtml_function_coverage=1 00:09:52.488 --rc genhtml_legend=1 00:09:52.488 --rc geninfo_all_blocks=1 00:09:52.488 --rc geninfo_unexecuted_blocks=1 00:09:52.488 00:09:52.488 ' 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:52.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.488 --rc genhtml_branch_coverage=1 00:09:52.488 --rc genhtml_function_coverage=1 00:09:52.488 --rc genhtml_legend=1 00:09:52.488 --rc geninfo_all_blocks=1 00:09:52.488 --rc geninfo_unexecuted_blocks=1 00:09:52.488 00:09:52.488 ' 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.488 --rc genhtml_branch_coverage=1 00:09:52.488 --rc genhtml_function_coverage=1 00:09:52.488 --rc genhtml_legend=1 00:09:52.488 --rc geninfo_all_blocks=1 00:09:52.488 --rc geninfo_unexecuted_blocks=1 00:09:52.488 00:09:52.488 ' 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.488 --rc genhtml_branch_coverage=1 00:09:52.488 --rc genhtml_function_coverage=1 00:09:52.488 --rc genhtml_legend=1 00:09:52.488 --rc geninfo_all_blocks=1 00:09:52.488 --rc geninfo_unexecuted_blocks=1 00:09:52.488 00:09:52.488 ' 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:52.488 23:34:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.391 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.391 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:54.391 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:54.391 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:54.391 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:54.392 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:54.392 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:54.392 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:54.392 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.392 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:54.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:09:54.651 00:09:54.651 --- 10.0.0.2 ping statistics --- 00:09:54.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.651 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:09:54.651 00:09:54.651 --- 10.0.0.1 ping statistics --- 00:09:54.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.651 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=85184 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 85184 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 85184 ']' 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.651 23:34:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.651 [2024-11-19 23:34:28.899220] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
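For reference, the test-bed plumbing traced above boils down to the following commands (taken verbatim from the trace; cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this host's E810 ports, and everything below assumes root). One NIC port is moved into a private network namespace to act as the target side, both ends get addresses on 10.0.0.0/24, and TCP port 4420 is opened for NVMe/TCP:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator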
00:09:54.651 [2024-11-19 23:34:28.899324] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.910 [2024-11-19 23:34:28.978976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.910 [2024-11-19 23:34:29.030575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.910 [2024-11-19 23:34:29.030640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.910 [2024-11-19 23:34:29.030657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.910 [2024-11-19 23:34:29.030672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.910 [2024-11-19 23:34:29.030684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.910 [2024-11-19 23:34:29.031367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.910 [2024-11-19 23:34:29.193528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.910 [2024-11-19 23:34:29.209757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.910 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:55.168 malloc0 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:55.168 { 00:09:55.168 "params": { 00:09:55.168 "name": "Nvme$subsystem", 00:09:55.168 "trtype": "$TEST_TRANSPORT", 00:09:55.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:55.168 "adrfam": "ipv4", 00:09:55.168 "trsvcid": "$NVMF_PORT", 00:09:55.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:55.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:55.168 "hdgst": ${hdgst:-false}, 00:09:55.168 "ddgst": ${ddgst:-false} 00:09:55.168 }, 00:09:55.168 "method": "bdev_nvme_attach_controller" 00:09:55.168 } 00:09:55.168 EOF 00:09:55.168 )") 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:55.168 23:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:55.168 "params": { 00:09:55.168 "name": "Nvme1", 00:09:55.168 "trtype": "tcp", 00:09:55.168 "traddr": "10.0.0.2", 00:09:55.168 "adrfam": "ipv4", 00:09:55.168 "trsvcid": "4420", 00:09:55.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:55.168 "hdgst": false, 00:09:55.168 "ddgst": false 00:09:55.168 }, 00:09:55.168 "method": "bdev_nvme_attach_controller" 00:09:55.168 }' 00:09:55.168 [2024-11-19 23:34:29.288326] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:09:55.168 [2024-11-19 23:34:29.288427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85209 ] 00:09:55.168 [2024-11-19 23:34:29.359778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.168 [2024-11-19 23:34:29.409244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.774 Running I/O for 10 seconds... 00:09:57.643 5545.00 IOPS, 43.32 MiB/s [2024-11-19T22:34:32.889Z] 5640.00 IOPS, 44.06 MiB/s [2024-11-19T22:34:33.823Z] 5653.67 IOPS, 44.17 MiB/s [2024-11-19T22:34:35.197Z] 5661.75 IOPS, 44.23 MiB/s [2024-11-19T22:34:36.130Z] 5661.80 IOPS, 44.23 MiB/s [2024-11-19T22:34:37.065Z] 5662.50 IOPS, 44.24 MiB/s [2024-11-19T22:34:37.997Z] 5664.57 IOPS, 44.25 MiB/s [2024-11-19T22:34:38.931Z] 5664.50 IOPS, 44.25 MiB/s [2024-11-19T22:34:39.862Z] 5659.33 IOPS, 44.21 MiB/s [2024-11-19T22:34:39.862Z] 5660.00 IOPS, 44.22 MiB/s 00:10:05.550 Latency(us) 00:10:05.550 [2024-11-19T22:34:39.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.550 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:05.550 Verification LBA range: start 0x0 length 0x1000 00:10:05.550 Nvme1n1 : 10.02 5663.26 44.24 0.00 0.00 22540.64 3470.98 32428.18 00:10:05.550 [2024-11-19T22:34:39.862Z] =================================================================================================================== 00:10:05.550 [2024-11-19T22:34:39.862Z] Total : 5663.26 44.24 0.00 0.00 22540.64 3470.98 32428.18 00:10:05.809 23:34:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=86542 00:10:05.809 23:34:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:05.809 23:34:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:05.809 23:34:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:05.809 23:34:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:05.809 23:34:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:05.809 23:34:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:05.809 23:34:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:05.809 23:34:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:05.809 { 00:10:05.809 "params": { 00:10:05.809 "name": 
"Nvme$subsystem", 00:10:05.809 "trtype": "$TEST_TRANSPORT", 00:10:05.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.810 "adrfam": "ipv4", 00:10:05.810 "trsvcid": "$NVMF_PORT", 00:10:05.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.810 "hdgst": ${hdgst:-false}, 00:10:05.810 "ddgst": ${ddgst:-false} 00:10:05.810 }, 00:10:05.810 "method": "bdev_nvme_attach_controller" 00:10:05.810 } 00:10:05.810 EOF 00:10:05.810 )") 00:10:05.810 23:34:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:05.810 23:34:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:05.810 [2024-11-19 23:34:40.005850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.005895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 23:34:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:05.810 23:34:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:05.810 "params": { 00:10:05.810 "name": "Nvme1", 00:10:05.810 "trtype": "tcp", 00:10:05.810 "traddr": "10.0.0.2", 00:10:05.810 "adrfam": "ipv4", 00:10:05.810 "trsvcid": "4420", 00:10:05.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.810 "hdgst": false, 00:10:05.810 "ddgst": false 00:10:05.810 }, 00:10:05.810 "method": "bdev_nvme_attach_controller" 00:10:05.810 }' 00:10:05.810 [2024-11-19 23:34:40.013807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.013837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.021825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.021852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.029841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.029865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.037869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.037896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.045896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.045934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.050700] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:10:05.810 [2024-11-19 23:34:40.050787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86542 ] 00:10:05.810 [2024-11-19 23:34:40.053915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.053942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.061935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.061961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.069958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.069983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.077979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.078004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.086002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.086027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.094023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.094048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.102046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.102079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.110077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.110124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.810 [2024-11-19 23:34:40.118100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.810 [2024-11-19 23:34:40.118140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.126135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.126158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.130896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.069 [2024-11-19 23:34:40.134153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.134175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.142206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.142241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.150217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.150247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:06.069 [2024-11-19 23:34:40.158205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.158228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.166254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.166276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.174249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.174272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.182270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.182293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.183450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.069 [2024-11-19 23:34:40.190288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.190311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.198328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.198369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.206384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.206417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.214409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.214448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.222441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.222480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.230465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.230504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.238485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.238524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.246507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.246546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.254497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.254522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.262536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.262574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 
23:34:40.270562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.270599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.278587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.278623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.286580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.286605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.294601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.294626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.303007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.303033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.311020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.311046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.319031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.319059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.327054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.327103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.335083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.335128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.343116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.343142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.351135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.069 [2024-11-19 23:34:40.351157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.069 [2024-11-19 23:34:40.359159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.070 [2024-11-19 23:34:40.359182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.070 [2024-11-19 23:34:40.367172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.070 [2024-11-19 23:34:40.367195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.070 [2024-11-19 23:34:40.375281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.070 [2024-11-19 23:34:40.375307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.383285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.383310] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.391300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.391322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.399320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.399342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.407342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.407377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.415385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.415410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.423398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.423422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.431440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.431467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.439471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.439496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.447480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.447506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.455500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.455526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.463523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.463548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.471553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.471580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.479573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.479603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.487624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.487652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.495632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.495660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 Running I/O for 5 seconds... 
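For reference, the target-side sequence exercised by the zcopy test above, collected in one place (rpc_cmd is the SPDK test-framework RPC helper; the target runs inside the cvl_0_0_ns_spdk namespace, so a standalone reproduction would need that namespace and the test environment, or the equivalent scripts/rpc.py calls; <spdk> stands for the checkout path shown in the trace):

  ip netns exec cvl_0_0_ns_spdk <spdk>/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy       # TCP transport with zero-copy enabled
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0              # 32 MB malloc bdev, 4096-byte blocks
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches from the initiator side over tcp/10.0.0.2:4420 using the generated bdev_nvme_attach_controller JSON shown earlier (first the 10 s verify run at queue depth 128 with 8192-byte I/O, then the 5 s randrw 50/50 run in progress here). The long run of 'Requested NSID 1 already in use' / 'Unable to add namespace' messages that follows is the target rejecting repeated nvmf_subsystem_add_ns calls for an NSID that is already attached, issued by the test while the bdevperf I/O is in flight.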
00:10:06.328 [2024-11-19 23:34:40.503661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.503701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.518701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.518744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.531713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.531741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.544753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.328 [2024-11-19 23:34:40.544782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.328 [2024-11-19 23:34:40.557240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.329 [2024-11-19 23:34:40.557267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.329 [2024-11-19 23:34:40.570032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.329 [2024-11-19 23:34:40.570084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.329 [2024-11-19 23:34:40.582368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.329 [2024-11-19 23:34:40.582410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.329 [2024-11-19 23:34:40.594914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.329 [2024-11-19 23:34:40.594941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.329 [2024-11-19 23:34:40.607250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.329 [2024-11-19 23:34:40.607280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.329 [2024-11-19 23:34:40.619791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.329 [2024-11-19 23:34:40.619820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.329 [2024-11-19 23:34:40.632140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.329 [2024-11-19 23:34:40.632183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.644988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.645016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.657325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.657353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.669792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.669820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.682247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 
[2024-11-19 23:34:40.682275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.694969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.694996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.707596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.707624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.719905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.719932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.732449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.732476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.745044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.745079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.757574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.757602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.770232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.770260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.782966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.782993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.795752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.795779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.807779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.807807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.819915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.819944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.832309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.832337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.843942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.843969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.856557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.856584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.868957] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.869004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.587 [2024-11-19 23:34:40.881625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.587 [2024-11-19 23:34:40.881652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.588 [2024-11-19 23:34:40.894330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.588 [2024-11-19 23:34:40.894369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:40.906721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:40.906750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:40.919127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:40.919155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:40.931317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:40.931369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:40.944461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:40.944493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:40.957265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:40.957293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:40.970417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:40.970444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:40.982856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:40.982884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:40.995302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:40.995331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:41.007319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:41.007372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:41.020197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:41.020225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:41.032464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:41.032506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:41.044537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:41.044581] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:41.056820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:41.056847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:41.069000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:41.069028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:41.081197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:41.081225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:41.093297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:41.093325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:41.107373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:41.107401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:41.118896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:41.118938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:41.131843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:41.131869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.846 [2024-11-19 23:34:41.144276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.846 [2024-11-19 23:34:41.144303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.156977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.157005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.169568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.169595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.182163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.182190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.195152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.195180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.208011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.208057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.220731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.220757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.233516] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.233543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.246026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.246077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.258241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.258268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.270633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.270674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.282655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.282682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.295369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.295396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.308119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.308147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.320606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.320648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.332995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.333039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.345628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.345671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.358001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.358047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.370300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.370328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.382840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.382867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.395041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.395098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.105 [2024-11-19 23:34:41.408054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.105 [2024-11-19 23:34:41.408117] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.420164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.420192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.432403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.432430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.444824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.444851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.457245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.457273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.469236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.469277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.481534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.481560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.493634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.493676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 10102.00 IOPS, 78.92 MiB/s [2024-11-19T22:34:41.675Z] [2024-11-19 23:34:41.505867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.505894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.518769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.518796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.531150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.531193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.543907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.543934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.556627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.556654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.568815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.568842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.581248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.581276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 
23:34:41.593431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.593458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.605321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.605364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.363 [2024-11-19 23:34:41.617936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.363 [2024-11-19 23:34:41.617967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.364 [2024-11-19 23:34:41.630307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.364 [2024-11-19 23:34:41.630335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.364 [2024-11-19 23:34:41.641961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.364 [2024-11-19 23:34:41.641999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.364 [2024-11-19 23:34:41.656159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.364 [2024-11-19 23:34:41.656187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.364 [2024-11-19 23:34:41.668207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.364 [2024-11-19 23:34:41.668235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.680859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.680887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.693177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.693205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.705201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.705229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.717702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.717731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.730839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.730867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.743256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.743285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.755319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.755348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.767657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.767701] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.779653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.779681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.792181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.792208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.804951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.804997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.817332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.817360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.830238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.830266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.842669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.842711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.855209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.855237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.867963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.867990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.880583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.880633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.893782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.893810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.906254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.906281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.918720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.918747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.622 [2024-11-19 23:34:41.930990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.622 [2024-11-19 23:34:41.931020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:41.943312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:41.943339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:41.955484] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:41.955527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:41.967252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:41.967280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:41.979471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:41.979499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:41.991847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:41.991874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.004360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.004402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.017277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.017319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.029512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.029539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.042457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.042484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.054945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.054972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.067899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.067926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.080357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.080386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.092906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.092934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.106006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.106033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.118738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.118765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.131408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.131436] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.143956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.143983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.156673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.156701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.169634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.169661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.880 [2024-11-19 23:34:42.181813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.880 [2024-11-19 23:34:42.181855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.194243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.194272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.206115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.206143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.218802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.218829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.231885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.231912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.243780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.243807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.256181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.256209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.268243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.268271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.280865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.280893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.293445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.293471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.305913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.305939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.318237] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.318265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.330441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.330468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.342845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.139 [2024-11-19 23:34:42.342872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.139 [2024-11-19 23:34:42.355401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.140 [2024-11-19 23:34:42.355428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.140 [2024-11-19 23:34:42.368236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.140 [2024-11-19 23:34:42.368264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.140 [2024-11-19 23:34:42.380575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.140 [2024-11-19 23:34:42.380603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.140 [2024-11-19 23:34:42.392793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.140 [2024-11-19 23:34:42.392820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.140 [2024-11-19 23:34:42.405144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.140 [2024-11-19 23:34:42.405172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.140 [2024-11-19 23:34:42.417485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.140 [2024-11-19 23:34:42.417512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.140 [2024-11-19 23:34:42.429769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.140 [2024-11-19 23:34:42.429796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.140 [2024-11-19 23:34:42.442119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.140 [2024-11-19 23:34:42.442147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.454218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.454245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.466689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.466717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.479088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.479133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.491837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.491863] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 10162.00 IOPS, 79.39 MiB/s [2024-11-19T22:34:42.710Z] [2024-11-19 23:34:42.504352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.504379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.516796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.516823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.529470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.529512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.541849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.541875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.554532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.554563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.567539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.567566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.580449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.580476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.593335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.593378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.606247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.606275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.619273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.619302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.632063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.632119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.644956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.644983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.657531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.657559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.670177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.670206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 
23:34:42.682173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.682201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.694994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.695025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.398 [2024-11-19 23:34:42.707591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.398 [2024-11-19 23:34:42.707618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.719771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.719797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.732173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.732200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.744856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.744882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.757812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.757853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.770364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.770390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.782983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.783014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.795062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.795098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.806955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.806997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.819433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.819473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.831813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.831841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.844127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.844154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.857102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.857130] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.869562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.869591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.881723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.881752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.893588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.893616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.905539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.905567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.918037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.918065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.930452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.930479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.942661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.942688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.954761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.954806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.657 [2024-11-19 23:34:42.966673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.657 [2024-11-19 23:34:42.966701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:42.979060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:42.979115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:42.991253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:42.991281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.003743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.003771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.016421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.016449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.029535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.029563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.041813] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.041841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.054368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.054406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.066713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.066741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.078929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.078956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.091775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.091818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.103803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.103831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.115915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.115958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.128595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.128622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.141395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.141422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.153978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.154020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.167257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.167299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.179662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.179689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.192199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.192228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.204729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.204757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.915 [2024-11-19 23:34:43.217180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.915 [2024-11-19 23:34:43.217208] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.173 [2024-11-19 23:34:43.229563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.173 [2024-11-19 23:34:43.229590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.173 [2024-11-19 23:34:43.241721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.173 [2024-11-19 23:34:43.241748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.173 [2024-11-19 23:34:43.253981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.173 [2024-11-19 23:34:43.254008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.173 [2024-11-19 23:34:43.266505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.173 [2024-11-19 23:34:43.266532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.173 [2024-11-19 23:34:43.279311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.173 [2024-11-19 23:34:43.279339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.291765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.291825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.304627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.304658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.317528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.317570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.330250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.330293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.342811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.342838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.355297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.355325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.367952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.367998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.380686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.380712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.392701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.392727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.405207] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.405235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.417678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.417705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.430364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.430406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.443438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.443465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.456170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.456198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.469053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.469089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.174 [2024-11-19 23:34:43.480856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.174 [2024-11-19 23:34:43.480883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.493467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.493494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.505018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.505044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 10164.33 IOPS, 79.41 MiB/s [2024-11-19T22:34:43.745Z] [2024-11-19 23:34:43.517197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.517239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.529670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.529696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.542341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.542384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.554281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.554309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.566476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.566517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.578450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
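The dominant pattern in this stretch of the log is a message pair repeating roughly every 12 ms: spdk_nvmf_subsystem_add_ns_ext (subsystem.c:2123) rejects the request because NSID 1 is already attached to the subsystem, and the RPC handler (nvmf_rpc.c:1517) then reports "Unable to add namespace" to the caller. That is consistent with a test loop deliberately re-issuing the same add-namespace RPC to exercise the error path while I/O keeps running. As a hedged sketch only, a duplicate add like this can be driven over SPDK's JSON-RPC Unix socket; the socket path, NQN, bdev name and exact parameter layout below are assumptions for illustration and may differ from what this test actually sends (scripts/rpc.py nvmf_subsystem_add_ns is the usual wrapper for the same request).

#!/usr/bin/env python3
# Hedged sketch: provoke the "Requested NSID 1 already in use" error path by
# sending the same nvmf_subsystem_add_ns request twice over SPDK's JSON-RPC
# Unix socket. Socket path, NQN, bdev name and parameter shape are assumed
# for illustration; they are not taken from this log and may vary by version.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"        # default SPDK RPC socket (assumed)
NQN = "nqn.2016-06.io.spdk:cnode1"      # hypothetical subsystem NQN
BDEV = "Malloc0"                        # hypothetical bdev backing the namespace

def rpc(method, params, req_id):
    """Send one JSON-RPC 2.0 request and return the decoded reply."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK_PATH)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)  # return once a complete reply has parsed
            except json.JSONDecodeError:
                continue                # reply not fully received yet
    raise RuntimeError("socket closed before a full JSON-RPC reply arrived")

params = {"nqn": NQN, "namespace": {"bdev_name": BDEV, "nsid": 1}}
print(rpc("nvmf_subsystem_add_ns", params, 1))  # first add: expected to succeed
print(rpc("nvmf_subsystem_add_ns", params, 2))  # second add: expected to fail,
                                                # mirroring the errors in this log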
00:10:09.433 [2024-11-19 23:34:43.578494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.591531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.591576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.603991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.604018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.616571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.616613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.629200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.629231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.641551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.641578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.653663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.653689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.665588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.665616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.678566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.678593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.690634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.690660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.702909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.702936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.715175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.715212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.727162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.727203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.433 [2024-11-19 23:34:43.739448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.433 [2024-11-19 23:34:43.739476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.751569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.751600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.764200] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.764229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.776859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.776902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.789196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.789224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.801548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.801575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.814554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.814592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.826915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.826956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.839883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.839910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.852509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.852536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.864780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.864806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.877545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.877572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.890263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.890306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.902627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.902654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.915461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.915489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.927718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.927745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.940253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.940297] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.952874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.952900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.965194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.965236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.977495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.977522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:43.989153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:43.989195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.692 [2024-11-19 23:34:44.001238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.692 [2024-11-19 23:34:44.001265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.950 [2024-11-19 23:34:44.013580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.950 [2024-11-19 23:34:44.013607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.950 [2024-11-19 23:34:44.025790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.950 [2024-11-19 23:34:44.025817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.950 [2024-11-19 23:34:44.038357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.950 [2024-11-19 23:34:44.038401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.950 [2024-11-19 23:34:44.051309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.950 [2024-11-19 23:34:44.051336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.950 [2024-11-19 23:34:44.064044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.950 [2024-11-19 23:34:44.064083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.950 [2024-11-19 23:34:44.076522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.950 [2024-11-19 23:34:44.076549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.950 [2024-11-19 23:34:44.088895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.950 [2024-11-19 23:34:44.088922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.950 [2024-11-19 23:34:44.101740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.950 [2024-11-19 23:34:44.101767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.950 [2024-11-19 23:34:44.114727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.951 [2024-11-19 23:34:44.114754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.951 [2024-11-19 23:34:44.127683] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the error pair above repeats roughly every 12 ms for the whole zcopy I/O window, runtime 00:10:09.951 through 00:10:11.244, timestamps [2024-11-19 23:34:44.127710] through [2024-11-19 23:34:45.519608], interleaved only with the two throughput samples below)
00:10:10.210 10182.00 IOPS, 79.55 MiB/s [2024-11-19T22:34:44.522Z]
00:10:11.244 10185.20 IOPS, 79.57 MiB/s [2024-11-19T22:34:45.556Z]
00:10:11.244 Latency(us)
00:10:11.244 [2024-11-19T22:34:45.556Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average       min       max
00:10:11.244 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:11.244 Nvme1n1            :       5.01   10187.66    79.59     0.00    0.00   12547.09   5242.88  28544.57
00:10:11.244 [2024-11-19T22:34:45.556Z] ===================================================================================================================
00:10:11.244 [2024-11-19T22:34:45.556Z] Total              :              10187.66    79.59     0.00    0.00   12547.09   5242.88  28544.57
00:10:11.244 (after the run summary the still-queued add-namespace RPCs keep failing with the same error pair, now about 8 ms apart, starting at [2024-11-19 23:34:45.525676]; the tail of that drain follows:) 00:10:11.503 [2024-11-19 23:34:45.646040]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.503 [2024-11-19 23:34:45.646097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.503 [2024-11-19 23:34:45.654048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.503 [2024-11-19 23:34:45.654113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.503 [2024-11-19 23:34:45.662042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.503 [2024-11-19 23:34:45.662075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.503 [2024-11-19 23:34:45.670087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.503 [2024-11-19 23:34:45.670121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.503 [2024-11-19 23:34:45.678136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.503 [2024-11-19 23:34:45.678182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.503 [2024-11-19 23:34:45.686160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.503 [2024-11-19 23:34:45.686207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.503 [2024-11-19 23:34:45.694152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.503 [2024-11-19 23:34:45.694174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.503 [2024-11-19 23:34:45.702159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.503 [2024-11-19 23:34:45.702181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.503 [2024-11-19 23:34:45.710172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.503 [2024-11-19 23:34:45.710194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86542) - No such process 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 86542 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.503 delay0 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:11.503 23:34:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.503 23:34:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:11.503 [2024-11-19 23:34:45.798147] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:18.059 Initializing NVMe Controllers 00:10:18.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:18.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:18.059 Initialization complete. Launching workers. 00:10:18.059 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 139 00:10:18.059 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 426, failed to submit 33 00:10:18.059 success 277, unsuccessful 149, failed 0 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:18.059 rmmod nvme_tcp 00:10:18.059 rmmod nvme_fabrics 00:10:18.059 rmmod nvme_keyring 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 85184 ']' 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 85184 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 85184 ']' 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 85184 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85184 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
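The delay0 re-add sequence traced above with rpc_cmd maps directly onto SPDK's scripts/rpc.py (rpc_cmd in this log is the autotest wrapper around that script). A minimal hand-run sketch follows; it assumes the default /var/tmp/spdk.sock RPC socket and that malloc0 and nqn.2016-06.io.spdk:cnode1 already exist, as they do in this run.
    # sketch only: replay the traced RPCs by hand against the default socket
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # remove the existing NSID 1
    $rpc bdev_delay_create -b malloc0 -d delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000                  # same latency arguments as the traced call
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 # re-add NSID 1 backed by delay0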
00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85184' 00:10:18.059 killing process with pid 85184 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 85184 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 85184 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:18.059 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:18.318 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.318 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:18.318 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.318 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.318 23:34:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.223 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:20.223 00:10:20.223 real 0m27.876s 00:10:20.223 user 0m40.741s 00:10:20.223 sys 0m8.325s 00:10:20.223 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.223 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.223 ************************************ 00:10:20.223 END TEST nvmf_zcopy 00:10:20.223 ************************************ 00:10:20.223 23:34:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:20.223 23:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:20.223 23:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.223 23:34:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.223 ************************************ 00:10:20.223 START TEST nvmf_nmic 00:10:20.223 ************************************ 00:10:20.223 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:20.223 * Looking for test storage... 
00:10:20.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.223 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:20.223 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:20.223 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:20.482 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:20.482 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.482 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.482 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.482 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:20.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.483 --rc genhtml_branch_coverage=1 00:10:20.483 --rc genhtml_function_coverage=1 00:10:20.483 --rc genhtml_legend=1 00:10:20.483 --rc geninfo_all_blocks=1 00:10:20.483 --rc geninfo_unexecuted_blocks=1 00:10:20.483 00:10:20.483 ' 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:20.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.483 --rc genhtml_branch_coverage=1 00:10:20.483 --rc genhtml_function_coverage=1 00:10:20.483 --rc genhtml_legend=1 00:10:20.483 --rc geninfo_all_blocks=1 00:10:20.483 --rc geninfo_unexecuted_blocks=1 00:10:20.483 00:10:20.483 ' 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:20.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.483 --rc genhtml_branch_coverage=1 00:10:20.483 --rc genhtml_function_coverage=1 00:10:20.483 --rc genhtml_legend=1 00:10:20.483 --rc geninfo_all_blocks=1 00:10:20.483 --rc geninfo_unexecuted_blocks=1 00:10:20.483 00:10:20.483 ' 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:20.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.483 --rc genhtml_branch_coverage=1 00:10:20.483 --rc genhtml_function_coverage=1 00:10:20.483 --rc genhtml_legend=1 00:10:20.483 --rc geninfo_all_blocks=1 00:10:20.483 --rc geninfo_unexecuted_blocks=1 00:10:20.483 00:10:20.483 ' 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
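The lt 1.15 2 trace above is scripts/common.sh comparing the detected lcov version against 2 before choosing LCOV_OPTS. A simplified, hypothetical sketch of that element-wise comparison (not the real cmp_versions helper, which supports more operators and separators):
    # simplified stand-in for scripts/common.sh lt()/cmp_versions; assumes purely numeric components
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller component => "<" holds
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # the branch taken in this run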
00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:20.483 
23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:20.483 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.484 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.484 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.484 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:20.484 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:20.484 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:20.484 23:34:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.387 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.387 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.387 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:22.388 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:22.388 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.388 23:34:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:22.388 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:22.388 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.388 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:22.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:10:22.647 00:10:22.647 --- 10.0.0.2 ping statistics --- 00:10:22.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.647 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:22.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:10:22.647 00:10:22.647 --- 10.0.0.1 ping statistics --- 00:10:22.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.647 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=89842 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 89842 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 89842 ']' 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.647 23:34:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.647 [2024-11-19 23:34:56.900927] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
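For readers following the trace above: nvmf_tcp_init splits the two E810 ports between the host and a private network namespace so initiator and target traffic crosses real hardware. Condensed into plain shell (same interface names, addresses and firewall rule as this run; a sketch, not the helper's exact code from nvmf/common.sh):

# Clear any stale addresses, then move the target-side port into its own netns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Host side keeps cvl_0_1 as the initiator interface; the namespace gets cvl_0_0.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port (rule tagged so teardown can find it) and check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1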
00:10:22.647 [2024-11-19 23:34:56.901000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.906 [2024-11-19 23:34:56.981054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.906 [2024-11-19 23:34:57.031548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.906 [2024-11-19 23:34:57.031615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.906 [2024-11-19 23:34:57.031644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.906 [2024-11-19 23:34:57.031655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.906 [2024-11-19 23:34:57.031665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.906 [2024-11-19 23:34:57.033240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.906 [2024-11-19 23:34:57.033302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.906 [2024-11-19 23:34:57.033368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.906 [2024-11-19 23:34:57.033371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.906 [2024-11-19 23:34:57.187014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.906 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.165 Malloc0 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.165 [2024-11-19 23:34:57.256660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:23.165 test case1: single bdev can't be used in multiple subsystems 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.165 [2024-11-19 23:34:57.280435] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:23.165 [2024-11-19 23:34:57.280465] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:23.165 [2024-11-19 23:34:57.280479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.165 request: 00:10:23.165 { 00:10:23.165 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:23.165 "namespace": { 00:10:23.165 "bdev_name": "Malloc0", 00:10:23.165 "no_auto_visible": false 
00:10:23.165 }, 00:10:23.165 "method": "nvmf_subsystem_add_ns", 00:10:23.165 "req_id": 1 00:10:23.165 } 00:10:23.165 Got JSON-RPC error response 00:10:23.165 response: 00:10:23.165 { 00:10:23.165 "code": -32602, 00:10:23.165 "message": "Invalid parameters" 00:10:23.165 } 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:23.165 Adding namespace failed - expected result. 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:23.165 test case2: host connect to nvmf target in multiple paths 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.165 [2024-11-19 23:34:57.288556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.165 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.731 23:34:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:24.664 23:34:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:24.664 23:34:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:24.664 23:34:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.664 23:34:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:24.664 23:34:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:26.561 23:35:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:26.561 23:35:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:26.561 23:35:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.561 23:35:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:26.561 23:35:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.561 23:35:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:26.561 23:35:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:26.561 [global] 00:10:26.561 thread=1 00:10:26.561 invalidate=1 00:10:26.561 rw=write 00:10:26.561 time_based=1 00:10:26.561 runtime=1 00:10:26.561 ioengine=libaio 00:10:26.561 direct=1 00:10:26.561 bs=4096 00:10:26.561 iodepth=1 00:10:26.561 norandommap=0 00:10:26.561 numjobs=1 00:10:26.561 00:10:26.561 verify_dump=1 00:10:26.561 verify_backlog=512 00:10:26.561 verify_state_save=0 00:10:26.561 do_verify=1 00:10:26.561 verify=crc32c-intel 00:10:26.561 [job0] 00:10:26.561 filename=/dev/nvme0n1 00:10:26.561 Could not set queue depth (nvme0n1) 00:10:26.819 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.819 fio-3.35 00:10:26.819 Starting 1 thread 00:10:27.751 00:10:27.751 job0: (groupid=0, jobs=1): err= 0: pid=90459: Tue Nov 19 23:35:02 2024 00:10:27.751 read: IOPS=517, BW=2070KiB/s (2120kB/s)(2116KiB/1022msec) 00:10:27.751 slat (nsec): min=5123, max=46496, avg=8384.69, stdev=5457.02 00:10:27.751 clat (usec): min=173, max=42094, avg=1546.59, stdev=7373.73 00:10:27.751 lat (usec): min=181, max=42109, avg=1554.97, stdev=7376.43 00:10:27.751 clat percentiles (usec): 00:10:27.751 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 194], 00:10:27.751 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206], 00:10:27.751 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 233], 00:10:27.751 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:27.751 | 99.99th=[42206] 00:10:27.751 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:10:27.751 slat (usec): min=6, max=27921, avg=40.23, stdev=872.15 00:10:27.751 clat (usec): min=125, max=340, avg=149.89, stdev=18.58 00:10:27.751 lat (usec): min=133, max=28119, avg=190.11, stdev=873.94 00:10:27.751 clat percentiles (usec): 00:10:27.751 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:10:27.751 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:10:27.751 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 178], 00:10:27.751 | 99.00th=[ 231], 99.50th=[ 258], 99.90th=[ 306], 99.95th=[ 343], 00:10:27.751 | 99.99th=[ 343] 00:10:27.751 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:27.751 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:27.751 lat (usec) : 250=97.94%, 500=0.97% 00:10:27.751 lat (msec) : 50=1.09% 00:10:27.751 cpu : usr=0.78%, sys=1.76%, ctx=1555, majf=0, minf=1 00:10:27.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.752 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.752 00:10:27.752 Run status group 0 (all jobs): 00:10:27.752 READ: bw=2070KiB/s (2120kB/s), 2070KiB/s-2070KiB/s (2120kB/s-2120kB/s), io=2116KiB (2167kB), run=1022-1022msec 00:10:27.752 WRITE: bw=4008KiB/s (4104kB/s), 4008KiB/s-4008KiB/s (4104kB/s-4104kB/s), io=4096KiB (4194kB), run=1022-1022msec 00:10:27.752 00:10:27.752 Disk stats (read/write): 00:10:27.752 nvme0n1: ios=552/1024, merge=0/0, ticks=1676/150, in_queue=1826, util=98.60% 00:10:28.009 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:28.009 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.009 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:28.009 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:28.009 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.009 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:28.009 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.009 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:28.010 rmmod nvme_tcp 00:10:28.010 rmmod nvme_fabrics 00:10:28.010 rmmod nvme_keyring 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 89842 ']' 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 89842 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 89842 ']' 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 89842 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.010 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89842 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89842' 00:10:28.268 killing process with pid 89842 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 89842 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 89842 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.268 23:35:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:30.801 00:10:30.801 real 0m10.148s 00:10:30.801 user 0m23.054s 00:10:30.801 sys 0m2.418s 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.801 ************************************ 00:10:30.801 END TEST nvmf_nmic 00:10:30.801 ************************************ 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.801 ************************************ 00:10:30.801 START TEST nvmf_fio_target 00:10:30.801 ************************************ 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:30.801 * Looking for test storage... 
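The disconnect/cleanup sequence traced above (nvmftestfini at the end of the nmic run) undoes that setup. A rough sketch, assuming _remove_spdk_ns (whose body is redirected away in the trace) amounts to deleting the namespace, and with $nvmfpid holding the target PID (89842 in this run):

# Detach the initiator and unload the host-side NVMe fabrics modules.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target, then drop only the firewall rules SPDK added
# (they were tagged with an SPDK_NVMF comment at setup time).
kill "$nvmfpid"
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Tear down the target namespace and flush the initiator address.
ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns, not traced above
ip -4 addr flush cvl_0_1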
00:10:30.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.801 --rc genhtml_branch_coverage=1 00:10:30.801 --rc genhtml_function_coverage=1 00:10:30.801 --rc genhtml_legend=1 00:10:30.801 --rc geninfo_all_blocks=1 00:10:30.801 --rc geninfo_unexecuted_blocks=1 00:10:30.801 00:10:30.801 ' 00:10:30.801 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.801 --rc genhtml_branch_coverage=1 00:10:30.801 --rc genhtml_function_coverage=1 00:10:30.801 --rc genhtml_legend=1 00:10:30.801 --rc geninfo_all_blocks=1 00:10:30.802 --rc geninfo_unexecuted_blocks=1 00:10:30.802 00:10:30.802 ' 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.802 --rc genhtml_branch_coverage=1 00:10:30.802 --rc genhtml_function_coverage=1 00:10:30.802 --rc genhtml_legend=1 00:10:30.802 --rc geninfo_all_blocks=1 00:10:30.802 --rc geninfo_unexecuted_blocks=1 00:10:30.802 00:10:30.802 ' 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.802 --rc genhtml_branch_coverage=1 00:10:30.802 --rc genhtml_function_coverage=1 00:10:30.802 --rc genhtml_legend=1 00:10:30.802 --rc geninfo_all_blocks=1 00:10:30.802 --rc geninfo_unexecuted_blocks=1 00:10:30.802 00:10:30.802 ' 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.802 23:35:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.802 23:35:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.756 23:35:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:32.756 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:32.756 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.756 23:35:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:32.756 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:32.756 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:32.756 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.756 23:35:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:32.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:10:32.757 00:10:32.757 --- 10.0.0.2 ping statistics --- 00:10:32.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.757 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
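The device discovery at the start of this run repeats the pattern from the nmic test: filter PCI functions by vendor/device ID (0x8086/0x159b for the E810 ports found here) and read the netdev name out of sysfs. A stand-alone approximation of that lookup, not the gather_supported_nvmf_pci_devs helper itself:

# Collect Intel E810 functions and map each one to its kernel net device via sysfs.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for dev in "$pci"/net/*; do
        # Skip functions with no bound net driver (glob would stay literal).
        [[ -e $dev ]] && echo "Found net device under ${pci##*/}: ${dev##*/}"
    done
done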
00:10:32.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:10:32.757 00:10:32.757 --- 10.0.0.1 ping statistics --- 00:10:32.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.757 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=92551 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 92551 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 92551 ']' 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.757 23:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.757 [2024-11-19 23:35:07.028443] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
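As in the previous test, the target application is started inside the namespace and the harness waits for its JSON-RPC socket before issuing any rpc.py calls. A simplified equivalent of nvmfappstart/waitforlisten (the polling loop below is an assumption; the real helper does more checks):

# Start nvmf_tgt in the target namespace: -i shm id, -e tracepoint group mask, -m core mask.
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Wait until the app is listening on its JSON-RPC UNIX socket, bailing out if it dies first.
until [[ -S /var/tmp/spdk.sock ]]; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; break; }
    sleep 0.5
done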
00:10:32.757 [2024-11-19 23:35:07.028518] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.015 [2024-11-19 23:35:07.112886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.015 [2024-11-19 23:35:07.163682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.015 [2024-11-19 23:35:07.163748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.015 [2024-11-19 23:35:07.163764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.015 [2024-11-19 23:35:07.163797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.015 [2024-11-19 23:35:07.163810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.015 [2024-11-19 23:35:07.165577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.015 [2024-11-19 23:35:07.165633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.015 [2024-11-19 23:35:07.165690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.015 [2024-11-19 23:35:07.165693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.015 23:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.015 23:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:33.015 23:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:33.015 23:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.015 23:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.015 23:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.015 23:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:33.272 [2024-11-19 23:35:07.552235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.272 23:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.838 23:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:33.838 23:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.095 23:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:34.095 23:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.353 23:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:34.353 23:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.610 23:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:34.611 23:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:34.868 23:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:35.125 23:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:35.125 23:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:35.382 23:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:35.382 23:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:35.640 23:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:35.640 23:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:35.897 23:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:36.155 23:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:36.155 23:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:36.412 23:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:36.412 23:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:36.669 23:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.927 [2024-11-19 23:35:11.195116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.927 23:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:37.183 23:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:37.748 23:35:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.313 23:35:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:38.313 23:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:38.313 23:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.313 23:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:38.313 23:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:38.313 23:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:40.210 23:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:40.210 23:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:40.210 23:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.210 23:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:40.210 23:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.210 23:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:40.210 23:35:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:40.210 [global] 00:10:40.210 thread=1 00:10:40.210 invalidate=1 00:10:40.210 rw=write 00:10:40.210 time_based=1 00:10:40.210 runtime=1 00:10:40.210 ioengine=libaio 00:10:40.210 direct=1 00:10:40.210 bs=4096 00:10:40.210 iodepth=1 00:10:40.210 norandommap=0 00:10:40.210 numjobs=1 00:10:40.210 00:10:40.210 verify_dump=1 00:10:40.210 verify_backlog=512 00:10:40.210 verify_state_save=0 00:10:40.210 do_verify=1 00:10:40.210 verify=crc32c-intel 00:10:40.210 [job0] 00:10:40.210 filename=/dev/nvme0n1 00:10:40.210 [job1] 00:10:40.210 filename=/dev/nvme0n2 00:10:40.210 [job2] 00:10:40.210 filename=/dev/nvme0n3 00:10:40.210 [job3] 00:10:40.210 filename=/dev/nvme0n4 00:10:40.468 Could not set queue depth (nvme0n1) 00:10:40.468 Could not set queue depth (nvme0n2) 00:10:40.468 Could not set queue depth (nvme0n3) 00:10:40.468 Could not set queue depth (nvme0n4) 00:10:40.468 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.468 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.468 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.468 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.468 fio-3.35 00:10:40.468 Starting 4 threads 00:10:41.841 00:10:41.841 job0: (groupid=0, jobs=1): err= 0: pid=93632: Tue Nov 19 23:35:15 2024 00:10:41.841 read: IOPS=20, BW=82.3KiB/s (84.2kB/s)(84.0KiB/1021msec) 00:10:41.841 slat (nsec): min=11179, max=36013, avg=21335.52, stdev=8445.62 00:10:41.841 clat (usec): min=40941, max=44026, avg=41734.95, stdev=704.87 00:10:41.841 lat (usec): min=40975, max=44043, avg=41756.29, stdev=704.07 00:10:41.841 clat percentiles (usec): 00:10:41.841 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:10:41.841 | 30.00th=[41157], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:41.841 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:41.841 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:41.841 | 99.99th=[43779] 00:10:41.841 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:10:41.841 slat (nsec): min=7428, max=51792, avg=13333.97, stdev=6516.05 00:10:41.841 clat (usec): min=188, max=461, avg=263.61, stdev=42.79 00:10:41.841 lat (usec): min=202, max=471, avg=276.94, stdev=42.42 00:10:41.841 clat percentiles (usec): 00:10:41.841 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 221], 20.00th=[ 231], 00:10:41.841 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 269], 00:10:41.841 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 318], 95.00th=[ 347], 00:10:41.841 | 99.00th=[ 400], 99.50th=[ 404], 99.90th=[ 461], 99.95th=[ 461], 00:10:41.841 | 99.99th=[ 461] 00:10:41.841 bw ( KiB/s): min= 4096, max= 4096, per=26.25%, avg=4096.00, stdev= 0.00, samples=1 00:10:41.841 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:41.841 lat (usec) : 250=42.21%, 500=53.85% 00:10:41.841 lat (msec) : 50=3.94% 00:10:41.841 cpu : usr=0.59%, sys=0.78%, ctx=533, majf=0, minf=1 00:10:41.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.842 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.842 job1: (groupid=0, jobs=1): err= 0: pid=93633: Tue Nov 19 23:35:15 2024 00:10:41.842 read: IOPS=1192, BW=4771KiB/s (4886kB/s)(4876KiB/1022msec) 00:10:41.842 slat (nsec): min=4810, max=72635, avg=12404.85, stdev=8821.45 00:10:41.842 clat (usec): min=174, max=41241, avg=603.37, stdev=3858.65 00:10:41.842 lat (usec): min=181, max=41253, avg=615.78, stdev=3859.82 00:10:41.842 clat percentiles (usec): 00:10:41.842 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:10:41.842 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:10:41.842 | 70.00th=[ 223], 80.00th=[ 265], 90.00th=[ 297], 95.00th=[ 322], 00:10:41.842 | 99.00th=[ 5473], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:41.842 | 99.99th=[41157] 00:10:41.842 write: IOPS=1502, BW=6012KiB/s (6156kB/s)(6144KiB/1022msec); 0 zone resets 00:10:41.842 slat (nsec): min=6119, max=46338, avg=11618.47, stdev=5686.93 00:10:41.842 clat (usec): min=122, max=280, avg=158.70, stdev=22.62 00:10:41.842 lat (usec): min=130, max=293, avg=170.31, stdev=23.07 00:10:41.842 clat percentiles (usec): 00:10:41.842 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 141], 00:10:41.842 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 163], 00:10:41.842 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:10:41.842 | 99.00th=[ 251], 99.50th=[ 273], 99.90th=[ 277], 99.95th=[ 281], 00:10:41.842 | 99.99th=[ 281] 00:10:41.842 bw ( KiB/s): min= 4096, max= 8192, per=39.37%, avg=6144.00, stdev=2896.31, samples=2 00:10:41.842 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:41.842 lat (usec) : 250=88.68%, 500=10.85% 00:10:41.842 lat (msec) : 10=0.07%, 50=0.40% 00:10:41.842 cpu : usr=2.45%, sys=2.64%, ctx=2757, majf=0, minf=1 00:10:41.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:10:41.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.842 issued rwts: total=1219,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.842 job2: (groupid=0, jobs=1): err= 0: pid=93634: Tue Nov 19 23:35:15 2024 00:10:41.842 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:41.842 slat (nsec): min=4691, max=66298, avg=16879.05, stdev=8955.81 00:10:41.842 clat (usec): min=186, max=41306, avg=700.54, stdev=4009.61 00:10:41.842 lat (usec): min=198, max=41312, avg=717.42, stdev=4009.69 00:10:41.842 clat percentiles (usec): 00:10:41.842 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 219], 00:10:41.842 | 30.00th=[ 241], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 302], 00:10:41.842 | 70.00th=[ 322], 80.00th=[ 392], 90.00th=[ 457], 95.00th=[ 494], 00:10:41.842 | 99.00th=[ 652], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:41.842 | 99.99th=[41157] 00:10:41.842 write: IOPS=1425, BW=5702KiB/s (5839kB/s)(5708KiB/1001msec); 0 zone resets 00:10:41.842 slat (nsec): min=5513, max=39234, avg=11387.83, stdev=5084.58 00:10:41.842 clat (usec): min=121, max=376, avg=168.10, stdev=35.04 00:10:41.842 lat (usec): min=127, max=383, avg=179.49, stdev=35.56 00:10:41.842 clat percentiles (usec): 00:10:41.842 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 145], 00:10:41.842 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 163], 00:10:41.842 | 70.00th=[ 176], 80.00th=[ 190], 90.00th=[ 212], 95.00th=[ 237], 00:10:41.842 | 99.00th=[ 302], 99.50th=[ 326], 99.90th=[ 375], 99.95th=[ 375], 00:10:41.842 | 99.99th=[ 375] 00:10:41.842 bw ( KiB/s): min= 5336, max= 5336, per=34.19%, avg=5336.00, stdev= 0.00, samples=1 00:10:41.842 iops : min= 1334, max= 1334, avg=1334.00, stdev= 0.00, samples=1 00:10:41.842 lat (usec) : 250=71.03%, 500=27.34%, 750=1.22% 00:10:41.842 lat (msec) : 50=0.41% 00:10:41.842 cpu : usr=1.70%, sys=3.60%, ctx=2453, majf=0, minf=1 00:10:41.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.842 issued rwts: total=1024,1427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.842 job3: (groupid=0, jobs=1): err= 0: pid=93635: Tue Nov 19 23:35:15 2024 00:10:41.842 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:10:41.842 slat (nsec): min=8583, max=34016, avg=22721.00, stdev=9094.63 00:10:41.842 clat (usec): min=40898, max=41210, avg=40978.12, stdev=64.36 00:10:41.842 lat (usec): min=40932, max=41219, avg=41000.84, stdev=59.29 00:10:41.842 clat percentiles (usec): 00:10:41.842 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:41.842 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:41.842 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:41.842 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:41.842 | 99.99th=[41157] 00:10:41.842 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:10:41.842 slat (nsec): min=6271, max=52860, avg=15518.83, stdev=7629.61 00:10:41.842 clat (usec): min=139, max=1209, avg=268.82, stdev=82.80 00:10:41.842 lat (usec): 
min=147, max=1219, avg=284.34, stdev=81.32 00:10:41.842 clat percentiles (usec): 00:10:41.842 | 1.00th=[ 147], 5.00th=[ 174], 10.00th=[ 188], 20.00th=[ 202], 00:10:41.842 | 30.00th=[ 221], 40.00th=[ 237], 50.00th=[ 255], 60.00th=[ 277], 00:10:41.842 | 70.00th=[ 293], 80.00th=[ 326], 90.00th=[ 383], 95.00th=[ 404], 00:10:41.842 | 99.00th=[ 445], 99.50th=[ 461], 99.90th=[ 1205], 99.95th=[ 1205], 00:10:41.842 | 99.99th=[ 1205] 00:10:41.842 bw ( KiB/s): min= 4096, max= 4096, per=26.25%, avg=4096.00, stdev= 0.00, samples=1 00:10:41.842 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:41.842 lat (usec) : 250=45.78%, 500=50.09% 00:10:41.842 lat (msec) : 2=0.19%, 50=3.94% 00:10:41.842 cpu : usr=0.30%, sys=0.89%, ctx=533, majf=0, minf=2 00:10:41.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.842 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.842 00:10:41.842 Run status group 0 (all jobs): 00:10:41.842 READ: bw=8943KiB/s (9158kB/s), 82.3KiB/s-4771KiB/s (84.2kB/s-4886kB/s), io=9140KiB (9359kB), run=1001-1022msec 00:10:41.842 WRITE: bw=15.2MiB/s (16.0MB/s), 2006KiB/s-6012KiB/s (2054kB/s-6156kB/s), io=15.6MiB (16.3MB), run=1001-1022msec 00:10:41.842 00:10:41.842 Disk stats (read/write): 00:10:41.842 nvme0n1: ios=66/512, merge=0/0, ticks=687/126, in_queue=813, util=86.87% 00:10:41.842 nvme0n2: ios=1048/1240, merge=0/0, ticks=1578/181, in_queue=1759, util=98.07% 00:10:41.842 nvme0n3: ios=920/1024, merge=0/0, ticks=601/177, in_queue=778, util=89.02% 00:10:41.842 nvme0n4: ios=17/512, merge=0/0, ticks=698/133, in_queue=831, util=89.67% 00:10:41.842 23:35:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:41.842 [global] 00:10:41.842 thread=1 00:10:41.842 invalidate=1 00:10:41.842 rw=randwrite 00:10:41.842 time_based=1 00:10:41.842 runtime=1 00:10:41.842 ioengine=libaio 00:10:41.842 direct=1 00:10:41.842 bs=4096 00:10:41.842 iodepth=1 00:10:41.842 norandommap=0 00:10:41.842 numjobs=1 00:10:41.842 00:10:41.842 verify_dump=1 00:10:41.842 verify_backlog=512 00:10:41.842 verify_state_save=0 00:10:41.842 do_verify=1 00:10:41.842 verify=crc32c-intel 00:10:41.842 [job0] 00:10:41.842 filename=/dev/nvme0n1 00:10:41.842 [job1] 00:10:41.842 filename=/dev/nvme0n2 00:10:41.842 [job2] 00:10:41.842 filename=/dev/nvme0n3 00:10:41.842 [job3] 00:10:41.842 filename=/dev/nvme0n4 00:10:41.842 Could not set queue depth (nvme0n1) 00:10:41.842 Could not set queue depth (nvme0n2) 00:10:41.842 Could not set queue depth (nvme0n3) 00:10:41.842 Could not set queue depth (nvme0n4) 00:10:42.100 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.100 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.100 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.100 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.100 fio-3.35 00:10:42.100 Starting 4 threads 00:10:43.474 00:10:43.474 job0: (groupid=0, jobs=1): err= 0: pid=93984: Tue 
Nov 19 23:35:17 2024 00:10:43.474 read: IOPS=1876, BW=7504KiB/s (7685kB/s)(7512KiB/1001msec) 00:10:43.474 slat (nsec): min=5220, max=58413, avg=11164.94, stdev=6438.55 00:10:43.474 clat (usec): min=193, max=567, avg=274.05, stdev=70.41 00:10:43.474 lat (usec): min=200, max=587, avg=285.21, stdev=73.57 00:10:43.474 clat percentiles (usec): 00:10:43.474 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 227], 00:10:43.474 | 30.00th=[ 237], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265], 00:10:43.474 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 388], 95.00th=[ 445], 00:10:43.474 | 99.00th=[ 537], 99.50th=[ 537], 99.90th=[ 553], 99.95th=[ 570], 00:10:43.474 | 99.99th=[ 570] 00:10:43.474 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:43.474 slat (nsec): min=6626, max=71650, avg=16257.05, stdev=8836.91 00:10:43.474 clat (usec): min=128, max=1670, avg=202.67, stdev=73.66 00:10:43.474 lat (usec): min=136, max=1680, avg=218.93, stdev=77.44 00:10:43.474 clat percentiles (usec): 00:10:43.474 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 147], 00:10:43.474 | 30.00th=[ 157], 40.00th=[ 172], 50.00th=[ 186], 60.00th=[ 198], 00:10:43.474 | 70.00th=[ 221], 80.00th=[ 241], 90.00th=[ 289], 95.00th=[ 351], 00:10:43.474 | 99.00th=[ 433], 99.50th=[ 445], 99.90th=[ 523], 99.95th=[ 553], 00:10:43.474 | 99.99th=[ 1663] 00:10:43.474 bw ( KiB/s): min= 8192, max= 8192, per=42.36%, avg=8192.00, stdev= 0.00, samples=1 00:10:43.474 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:43.474 lat (usec) : 250=63.02%, 500=35.69%, 750=1.27% 00:10:43.474 lat (msec) : 2=0.03% 00:10:43.474 cpu : usr=4.00%, sys=7.30%, ctx=3926, majf=0, minf=1 00:10:43.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.474 issued rwts: total=1878,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.474 job1: (groupid=0, jobs=1): err= 0: pid=93985: Tue Nov 19 23:35:17 2024 00:10:43.474 read: IOPS=1368, BW=5475KiB/s (5606kB/s)(5480KiB/1001msec) 00:10:43.474 slat (nsec): min=5079, max=64546, avg=14002.93, stdev=6660.20 00:10:43.474 clat (usec): min=180, max=41062, avg=496.75, stdev=3105.86 00:10:43.474 lat (usec): min=186, max=41079, avg=510.75, stdev=3105.94 00:10:43.474 clat percentiles (usec): 00:10:43.474 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:10:43.474 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 233], 60.00th=[ 245], 00:10:43.474 | 70.00th=[ 273], 80.00th=[ 310], 90.00th=[ 351], 95.00th=[ 400], 00:10:43.474 | 99.00th=[ 570], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:43.474 | 99.99th=[41157] 00:10:43.474 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:43.474 slat (nsec): min=6754, max=62526, avg=13409.15, stdev=5386.25 00:10:43.474 clat (usec): min=128, max=415, avg=174.45, stdev=28.53 00:10:43.474 lat (usec): min=136, max=430, avg=187.86, stdev=30.33 00:10:43.474 clat percentiles (usec): 00:10:43.474 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:10:43.474 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 174], 60.00th=[ 184], 00:10:43.474 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 210], 00:10:43.474 | 99.00th=[ 285], 99.50th=[ 318], 99.90th=[ 379], 99.95th=[ 416], 00:10:43.474 | 99.99th=[ 416] 
00:10:43.474 bw ( KiB/s): min= 9688, max= 9688, per=50.09%, avg=9688.00, stdev= 0.00, samples=1 00:10:43.474 iops : min= 2422, max= 2422, avg=2422.00, stdev= 0.00, samples=1 00:10:43.474 lat (usec) : 250=81.42%, 500=17.69%, 750=0.62% 00:10:43.474 lat (msec) : 50=0.28% 00:10:43.474 cpu : usr=2.30%, sys=4.10%, ctx=2907, majf=0, minf=1 00:10:43.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.474 issued rwts: total=1370,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.474 job2: (groupid=0, jobs=1): err= 0: pid=93986: Tue Nov 19 23:35:17 2024 00:10:43.474 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:43.474 slat (nsec): min=6270, max=62988, avg=14892.15, stdev=7285.92 00:10:43.474 clat (usec): min=207, max=41039, avg=1594.31, stdev=7162.67 00:10:43.474 lat (usec): min=222, max=41055, avg=1609.20, stdev=7163.80 00:10:43.474 clat percentiles (usec): 00:10:43.474 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:10:43.474 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 255], 00:10:43.474 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 338], 95.00th=[ 482], 00:10:43.474 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:43.474 | 99.99th=[41157] 00:10:43.474 write: IOPS=864, BW=3457KiB/s (3540kB/s)(3460KiB/1001msec); 0 zone resets 00:10:43.474 slat (nsec): min=8044, max=47805, avg=14005.88, stdev=5328.32 00:10:43.474 clat (usec): min=149, max=297, avg=183.32, stdev=16.05 00:10:43.474 lat (usec): min=158, max=306, avg=197.32, stdev=17.64 00:10:43.474 clat percentiles (usec): 00:10:43.474 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:10:43.474 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:10:43.474 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 210], 00:10:43.474 | 99.00th=[ 225], 99.50th=[ 245], 99.90th=[ 297], 99.95th=[ 297], 00:10:43.474 | 99.99th=[ 297] 00:10:43.474 bw ( KiB/s): min= 4096, max= 4096, per=21.18%, avg=4096.00, stdev= 0.00, samples=1 00:10:43.474 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:43.474 lat (usec) : 250=82.86%, 500=15.61%, 750=0.22% 00:10:43.474 lat (msec) : 10=0.07%, 50=1.23% 00:10:43.474 cpu : usr=0.90%, sys=2.10%, ctx=1378, majf=0, minf=1 00:10:43.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.475 issued rwts: total=512,865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.475 job3: (groupid=0, jobs=1): err= 0: pid=93987: Tue Nov 19 23:35:17 2024 00:10:43.475 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:10:43.475 slat (nsec): min=14059, max=36459, avg=21503.41, stdev=8319.58 00:10:43.475 clat (usec): min=303, max=42001, avg=39283.91, stdev=8715.99 00:10:43.475 lat (usec): min=322, max=42020, avg=39305.41, stdev=8716.58 00:10:43.475 clat percentiles (usec): 00:10:43.475 | 1.00th=[ 306], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:43.475 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:43.475 | 70.00th=[41157], 
80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:43.475 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:43.475 | 99.99th=[42206] 00:10:43.475 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:10:43.475 slat (nsec): min=10043, max=73808, avg=25588.20, stdev=11389.79 00:10:43.475 clat (usec): min=196, max=440, avg=281.23, stdev=49.53 00:10:43.475 lat (usec): min=226, max=487, avg=306.81, stdev=48.32 00:10:43.475 clat percentiles (usec): 00:10:43.475 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 243], 00:10:43.475 | 30.00th=[ 247], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:10:43.475 | 70.00th=[ 297], 80.00th=[ 330], 90.00th=[ 359], 95.00th=[ 375], 00:10:43.475 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 441], 99.95th=[ 441], 00:10:43.475 | 99.99th=[ 441] 00:10:43.475 bw ( KiB/s): min= 4096, max= 4096, per=21.18%, avg=4096.00, stdev= 0.00, samples=1 00:10:43.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:43.475 lat (usec) : 250=31.65%, 500=64.42% 00:10:43.475 lat (msec) : 50=3.93% 00:10:43.475 cpu : usr=1.07%, sys=1.37%, ctx=535, majf=0, minf=1 00:10:43.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.475 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.475 00:10:43.475 Run status group 0 (all jobs): 00:10:43.475 READ: bw=14.4MiB/s (15.1MB/s), 85.8KiB/s-7504KiB/s (87.8kB/s-7685kB/s), io=14.8MiB (15.5MB), run=1001-1026msec 00:10:43.475 WRITE: bw=18.9MiB/s (19.8MB/s), 1996KiB/s-8184KiB/s (2044kB/s-8380kB/s), io=19.4MiB (20.3MB), run=1001-1026msec 00:10:43.475 00:10:43.475 Disk stats (read/write): 00:10:43.475 nvme0n1: ios=1586/1764, merge=0/0, ticks=442/327, in_queue=769, util=87.37% 00:10:43.475 nvme0n2: ios=1056/1536, merge=0/0, ticks=1502/260, in_queue=1762, util=98.27% 00:10:43.475 nvme0n3: ios=430/512, merge=0/0, ticks=1309/92, in_queue=1401, util=98.12% 00:10:43.475 nvme0n4: ios=74/512, merge=0/0, ticks=1304/138, in_queue=1442, util=98.22% 00:10:43.475 23:35:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:43.475 [global] 00:10:43.475 thread=1 00:10:43.475 invalidate=1 00:10:43.475 rw=write 00:10:43.475 time_based=1 00:10:43.475 runtime=1 00:10:43.475 ioengine=libaio 00:10:43.475 direct=1 00:10:43.475 bs=4096 00:10:43.475 iodepth=128 00:10:43.475 norandommap=0 00:10:43.475 numjobs=1 00:10:43.475 00:10:43.475 verify_dump=1 00:10:43.475 verify_backlog=512 00:10:43.475 verify_state_save=0 00:10:43.475 do_verify=1 00:10:43.475 verify=crc32c-intel 00:10:43.475 [job0] 00:10:43.475 filename=/dev/nvme0n1 00:10:43.475 [job1] 00:10:43.475 filename=/dev/nvme0n2 00:10:43.475 [job2] 00:10:43.475 filename=/dev/nvme0n3 00:10:43.475 [job3] 00:10:43.475 filename=/dev/nvme0n4 00:10:43.475 Could not set queue depth (nvme0n1) 00:10:43.475 Could not set queue depth (nvme0n2) 00:10:43.475 Could not set queue depth (nvme0n3) 00:10:43.475 Could not set queue depth (nvme0n4) 00:10:43.475 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.475 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.475 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.475 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.475 fio-3.35 00:10:43.475 Starting 4 threads 00:10:44.850 00:10:44.850 job0: (groupid=0, jobs=1): err= 0: pid=94214: Tue Nov 19 23:35:18 2024 00:10:44.850 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:10:44.850 slat (usec): min=3, max=11275, avg=145.44, stdev=900.72 00:10:44.850 clat (usec): min=5107, max=38808, avg=19095.18, stdev=6345.42 00:10:44.850 lat (usec): min=5117, max=44080, avg=19240.63, stdev=6436.59 00:10:44.850 clat percentiles (usec): 00:10:44.850 | 1.00th=[ 7373], 5.00th=[ 9896], 10.00th=[12125], 20.00th=[14222], 00:10:44.850 | 30.00th=[14353], 40.00th=[17695], 50.00th=[17957], 60.00th=[20841], 00:10:44.850 | 70.00th=[22938], 80.00th=[24249], 90.00th=[27657], 95.00th=[30016], 00:10:44.850 | 99.00th=[35914], 99.50th=[36963], 99.90th=[37487], 99.95th=[38536], 00:10:44.850 | 99.99th=[39060] 00:10:44.850 write: IOPS=2874, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1007msec); 0 zone resets 00:10:44.850 slat (usec): min=5, max=11726, avg=206.05, stdev=931.50 00:10:44.850 clat (usec): min=3615, max=71918, avg=26978.44, stdev=12165.87 00:10:44.850 lat (usec): min=3622, max=71939, avg=27184.49, stdev=12244.98 00:10:44.850 clat percentiles (usec): 00:10:44.850 | 1.00th=[ 5735], 5.00th=[12125], 10.00th=[15664], 20.00th=[20317], 00:10:44.850 | 30.00th=[22676], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:10:44.850 | 70.00th=[24773], 80.00th=[30802], 90.00th=[47973], 95.00th=[57410], 00:10:44.850 | 99.00th=[64226], 99.50th=[65274], 99.90th=[71828], 99.95th=[71828], 00:10:44.850 | 99.99th=[71828] 00:10:44.850 bw ( KiB/s): min= 9856, max=12288, per=16.67%, avg=11072.00, stdev=1719.68, samples=2 00:10:44.850 iops : min= 2464, max= 3072, avg=2768.00, stdev=429.92, samples=2 00:10:44.850 lat (msec) : 4=0.26%, 10=3.90%, 20=33.47%, 50=57.78%, 100=4.58% 00:10:44.850 cpu : usr=2.68%, sys=3.88%, ctx=326, majf=0, minf=1 00:10:44.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:44.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.850 issued rwts: total=2560,2895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.850 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.850 job1: (groupid=0, jobs=1): err= 0: pid=94215: Tue Nov 19 23:35:18 2024 00:10:44.850 read: IOPS=5396, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1004msec) 00:10:44.850 slat (usec): min=2, max=11092, avg=91.97, stdev=619.54 00:10:44.850 clat (usec): min=2350, max=22956, avg=11894.09, stdev=3001.32 00:10:44.850 lat (usec): min=4316, max=22972, avg=11986.06, stdev=3034.08 00:10:44.850 clat percentiles (usec): 00:10:44.850 | 1.00th=[ 5932], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10290], 00:10:44.850 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11338], 00:10:44.850 | 70.00th=[12256], 80.00th=[14091], 90.00th=[16450], 95.00th=[18482], 00:10:44.850 | 99.00th=[20579], 99.50th=[21103], 99.90th=[22414], 99.95th=[22938], 00:10:44.850 | 99.99th=[22938] 00:10:44.850 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:10:44.850 slat (usec): min=4, max=26790, avg=78.69, stdev=531.51 00:10:44.850 clat (usec): min=2585, max=29429, avg=10548.60, 
stdev=2151.11 00:10:44.850 lat (usec): min=2593, max=29463, avg=10627.30, stdev=2195.48 00:10:44.850 clat percentiles (usec): 00:10:44.850 | 1.00th=[ 4015], 5.00th=[ 5735], 10.00th=[ 7111], 20.00th=[ 9503], 00:10:44.850 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:10:44.850 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11863], 95.00th=[12387], 00:10:44.850 | 99.00th=[15664], 99.50th=[16712], 99.90th=[21103], 99.95th=[22938], 00:10:44.850 | 99.99th=[29492] 00:10:44.850 bw ( KiB/s): min=22512, max=22544, per=33.92%, avg=22528.00, stdev=22.63, samples=2 00:10:44.850 iops : min= 5628, max= 5636, avg=5632.00, stdev= 5.66, samples=2 00:10:44.850 lat (msec) : 4=0.52%, 10=20.44%, 20=77.84%, 50=1.20% 00:10:44.850 cpu : usr=7.88%, sys=11.17%, ctx=608, majf=0, minf=1 00:10:44.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:44.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.850 issued rwts: total=5418,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.850 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.850 job2: (groupid=0, jobs=1): err= 0: pid=94216: Tue Nov 19 23:35:18 2024 00:10:44.850 read: IOPS=4850, BW=18.9MiB/s (19.9MB/s)(19.0MiB/1002msec) 00:10:44.850 slat (usec): min=3, max=5803, avg=99.91, stdev=571.19 00:10:44.850 clat (usec): min=837, max=18676, avg=12778.30, stdev=1765.86 00:10:44.850 lat (usec): min=6310, max=19131, avg=12878.21, stdev=1816.79 00:10:44.850 clat percentiles (usec): 00:10:44.850 | 1.00th=[ 7242], 5.00th=[ 9634], 10.00th=[11076], 20.00th=[11994], 00:10:44.850 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:10:44.850 | 70.00th=[13173], 80.00th=[13829], 90.00th=[14877], 95.00th=[15926], 00:10:44.850 | 99.00th=[17695], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:10:44.850 | 99.99th=[18744] 00:10:44.850 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:44.850 slat (usec): min=3, max=6228, avg=90.26, stdev=458.09 00:10:44.850 clat (usec): min=5886, max=18530, avg=12649.45, stdev=1523.76 00:10:44.850 lat (usec): min=5900, max=18551, avg=12739.71, stdev=1556.29 00:10:44.850 clat percentiles (usec): 00:10:44.850 | 1.00th=[ 7439], 5.00th=[10290], 10.00th=[11207], 20.00th=[12125], 00:10:44.850 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:10:44.850 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[14877], 00:10:44.850 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:10:44.850 | 99.99th=[18482] 00:10:44.850 bw ( KiB/s): min=20480, max=20480, per=30.84%, avg=20480.00, stdev= 0.00, samples=2 00:10:44.850 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:44.850 lat (usec) : 1000=0.01% 00:10:44.850 lat (msec) : 10=5.81%, 20=94.18% 00:10:44.850 cpu : usr=5.99%, sys=11.59%, ctx=485, majf=0, minf=1 00:10:44.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:44.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.851 issued rwts: total=4860,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.851 job3: (groupid=0, jobs=1): err= 0: pid=94217: Tue Nov 19 23:35:18 2024 00:10:44.851 read: IOPS=2718, BW=10.6MiB/s 
(11.1MB/s)(10.7MiB/1005msec) 00:10:44.851 slat (usec): min=3, max=15326, avg=165.86, stdev=994.88 00:10:44.851 clat (usec): min=3989, max=49913, avg=19626.51, stdev=6585.40 00:10:44.851 lat (usec): min=4009, max=49937, avg=19792.37, stdev=6675.80 00:10:44.851 clat percentiles (usec): 00:10:44.851 | 1.00th=[ 8225], 5.00th=[12780], 10.00th=[14091], 20.00th=[15270], 00:10:44.851 | 30.00th=[15664], 40.00th=[16319], 50.00th=[16909], 60.00th=[18220], 00:10:44.851 | 70.00th=[21627], 80.00th=[25035], 90.00th=[28967], 95.00th=[31327], 00:10:44.851 | 99.00th=[42730], 99.50th=[44303], 99.90th=[50070], 99.95th=[50070], 00:10:44.851 | 99.99th=[50070] 00:10:44.851 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:10:44.851 slat (usec): min=4, max=15574, avg=167.03, stdev=874.00 00:10:44.851 clat (usec): min=9768, max=61912, avg=23794.17, stdev=8255.12 00:10:44.851 lat (usec): min=9785, max=61978, avg=23961.20, stdev=8319.68 00:10:44.851 clat percentiles (usec): 00:10:44.851 | 1.00th=[12780], 5.00th=[14222], 10.00th=[14484], 20.00th=[18220], 00:10:44.851 | 30.00th=[20317], 40.00th=[22938], 50.00th=[23725], 60.00th=[23987], 00:10:44.851 | 70.00th=[24511], 80.00th=[24773], 90.00th=[30540], 95.00th=[43779], 00:10:44.851 | 99.00th=[53740], 99.50th=[55837], 99.90th=[61604], 99.95th=[62129], 00:10:44.851 | 99.99th=[62129] 00:10:44.851 bw ( KiB/s): min=11384, max=13192, per=18.50%, avg=12288.00, stdev=1278.45, samples=2 00:10:44.851 iops : min= 2846, max= 3298, avg=3072.00, stdev=319.61, samples=2 00:10:44.851 lat (msec) : 4=0.02%, 10=1.02%, 20=45.16%, 50=52.36%, 100=1.45% 00:10:44.851 cpu : usr=4.08%, sys=7.27%, ctx=292, majf=0, minf=1 00:10:44.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:44.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.851 issued rwts: total=2732,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.851 00:10:44.851 Run status group 0 (all jobs): 00:10:44.851 READ: bw=60.4MiB/s (63.3MB/s), 9.93MiB/s-21.1MiB/s (10.4MB/s-22.1MB/s), io=60.8MiB (63.8MB), run=1002-1007msec 00:10:44.851 WRITE: bw=64.9MiB/s (68.0MB/s), 11.2MiB/s-21.9MiB/s (11.8MB/s-23.0MB/s), io=65.3MiB (68.5MB), run=1002-1007msec 00:10:44.851 00:10:44.851 Disk stats (read/write): 00:10:44.851 nvme0n1: ios=2510/2560, merge=0/0, ticks=25952/31276, in_queue=57228, util=98.30% 00:10:44.851 nvme0n2: ios=4659/4710, merge=0/0, ticks=52860/47901, in_queue=100761, util=98.38% 00:10:44.851 nvme0n3: ios=4096/4431, merge=0/0, ticks=25467/25147, in_queue=50614, util=89.07% 00:10:44.851 nvme0n4: ios=2097/2560, merge=0/0, ticks=22893/30304, in_queue=53197, util=96.75% 00:10:44.851 23:35:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:44.851 [global] 00:10:44.851 thread=1 00:10:44.851 invalidate=1 00:10:44.851 rw=randwrite 00:10:44.851 time_based=1 00:10:44.851 runtime=1 00:10:44.851 ioengine=libaio 00:10:44.851 direct=1 00:10:44.851 bs=4096 00:10:44.851 iodepth=128 00:10:44.851 norandommap=0 00:10:44.851 numjobs=1 00:10:44.851 00:10:44.851 verify_dump=1 00:10:44.851 verify_backlog=512 00:10:44.851 verify_state_save=0 00:10:44.851 do_verify=1 00:10:44.851 verify=crc32c-intel 00:10:44.851 [job0] 00:10:44.851 filename=/dev/nvme0n1 00:10:44.851 [job1] 
00:10:44.851 filename=/dev/nvme0n2 00:10:44.851 [job2] 00:10:44.851 filename=/dev/nvme0n3 00:10:44.851 [job3] 00:10:44.851 filename=/dev/nvme0n4 00:10:44.851 Could not set queue depth (nvme0n1) 00:10:44.851 Could not set queue depth (nvme0n2) 00:10:44.851 Could not set queue depth (nvme0n3) 00:10:44.851 Could not set queue depth (nvme0n4) 00:10:44.851 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.851 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.851 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.851 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.851 fio-3.35 00:10:44.851 Starting 4 threads 00:10:46.224 00:10:46.224 job0: (groupid=0, jobs=1): err= 0: pid=94441: Tue Nov 19 23:35:20 2024 00:10:46.224 read: IOPS=2158, BW=8636KiB/s (8843kB/s)(8696KiB/1007msec) 00:10:46.224 slat (usec): min=2, max=17407, avg=185.02, stdev=1067.44 00:10:46.224 clat (usec): min=2303, max=66301, avg=21670.28, stdev=10185.22 00:10:46.224 lat (usec): min=9850, max=66306, avg=21855.30, stdev=10275.46 00:10:46.224 clat percentiles (usec): 00:10:46.224 | 1.00th=[11076], 5.00th=[12911], 10.00th=[13829], 20.00th=[14484], 00:10:46.224 | 30.00th=[15401], 40.00th=[15926], 50.00th=[18744], 60.00th=[20317], 00:10:46.224 | 70.00th=[21103], 80.00th=[27919], 90.00th=[34866], 95.00th=[39060], 00:10:46.224 | 99.00th=[63177], 99.50th=[64750], 99.90th=[66323], 99.95th=[66323], 00:10:46.224 | 99.99th=[66323] 00:10:46.224 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:10:46.224 slat (usec): min=3, max=11821, avg=227.72, stdev=956.33 00:10:46.224 clat (usec): min=8644, max=83899, avg=31368.98, stdev=17501.25 00:10:46.224 lat (usec): min=8658, max=83912, avg=31596.70, stdev=17569.02 00:10:46.224 clat percentiles (usec): 00:10:46.224 | 1.00th=[10683], 5.00th=[11469], 10.00th=[11731], 20.00th=[20841], 00:10:46.224 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24511], 00:10:46.224 | 70.00th=[34341], 80.00th=[46400], 90.00th=[60031], 95.00th=[67634], 00:10:46.224 | 99.00th=[82314], 99.50th=[83362], 99.90th=[84411], 99.95th=[84411], 00:10:46.224 | 99.99th=[84411] 00:10:46.224 bw ( KiB/s): min=10184, max=10280, per=15.74%, avg=10232.00, stdev=67.88, samples=2 00:10:46.224 iops : min= 2546, max= 2570, avg=2558.00, stdev=16.97, samples=2 00:10:46.224 lat (msec) : 4=0.02%, 10=0.63%, 20=34.64%, 50=53.82%, 100=10.88% 00:10:46.224 cpu : usr=2.58%, sys=3.38%, ctx=383, majf=0, minf=1 00:10:46.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:10:46.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.224 issued rwts: total=2174,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.224 job1: (groupid=0, jobs=1): err= 0: pid=94442: Tue Nov 19 23:35:20 2024 00:10:46.224 read: IOPS=5335, BW=20.8MiB/s (21.9MB/s)(21.0MiB/1007msec) 00:10:46.224 slat (usec): min=2, max=10406, avg=88.79, stdev=634.71 00:10:46.224 clat (usec): min=2730, max=25317, avg=11845.84, stdev=2746.27 00:10:46.224 lat (usec): min=3896, max=27772, avg=11934.62, stdev=2785.77 00:10:46.224 clat percentiles (usec): 00:10:46.224 | 1.00th=[ 5735], 
5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[10421], 00:10:46.224 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:10:46.224 | 70.00th=[11731], 80.00th=[13435], 90.00th=[15795], 95.00th=[17695], 00:10:46.224 | 99.00th=[20579], 99.50th=[21365], 99.90th=[25297], 99.95th=[25297], 00:10:46.224 | 99.99th=[25297] 00:10:46.224 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:10:46.224 slat (usec): min=3, max=29274, avg=80.28, stdev=635.56 00:10:46.224 clat (usec): min=2440, max=49731, avg=11372.50, stdev=4052.87 00:10:46.224 lat (usec): min=2447, max=49752, avg=11452.78, stdev=4106.89 00:10:46.224 clat percentiles (usec): 00:10:46.225 | 1.00th=[ 3654], 5.00th=[ 5932], 10.00th=[ 8029], 20.00th=[10159], 00:10:46.225 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11600], 00:10:46.225 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12518], 95.00th=[16909], 00:10:46.225 | 99.00th=[30802], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:10:46.225 | 99.99th=[49546] 00:10:46.225 bw ( KiB/s): min=20768, max=24288, per=34.65%, avg=22528.00, stdev=2489.02, samples=2 00:10:46.225 iops : min= 5192, max= 6072, avg=5632.00, stdev=622.25, samples=2 00:10:46.225 lat (msec) : 4=0.95%, 10=15.54%, 20=80.83%, 50=2.69% 00:10:46.225 cpu : usr=4.47%, sys=11.33%, ctx=499, majf=0, minf=1 00:10:46.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:46.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.225 issued rwts: total=5373,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.225 job2: (groupid=0, jobs=1): err= 0: pid=94448: Tue Nov 19 23:35:20 2024 00:10:46.225 read: IOPS=3039, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:10:46.225 slat (usec): min=2, max=16636, avg=158.01, stdev=982.44 00:10:46.225 clat (usec): min=4538, max=56520, avg=17822.08, stdev=8068.24 00:10:46.225 lat (usec): min=6234, max=56562, avg=17980.09, stdev=8147.95 00:10:46.225 clat percentiles (usec): 00:10:46.225 | 1.00th=[ 7373], 5.00th=[11994], 10.00th=[13042], 20.00th=[13566], 00:10:46.225 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[15533], 00:10:46.225 | 70.00th=[17171], 80.00th=[19530], 90.00th=[29230], 95.00th=[36963], 00:10:46.225 | 99.00th=[49546], 99.50th=[51643], 99.90th=[56361], 99.95th=[56361], 00:10:46.225 | 99.99th=[56361] 00:10:46.225 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:10:46.225 slat (usec): min=3, max=19613, avg=153.62, stdev=653.44 00:10:46.225 clat (usec): min=3059, max=59575, avg=23835.39, stdev=10150.10 00:10:46.225 lat (usec): min=3066, max=59583, avg=23989.02, stdev=10228.70 00:10:46.225 clat percentiles (usec): 00:10:46.225 | 1.00th=[ 4621], 5.00th=[ 7242], 10.00th=[10421], 20.00th=[14484], 00:10:46.225 | 30.00th=[21890], 40.00th=[23462], 50.00th=[23987], 60.00th=[24249], 00:10:46.225 | 70.00th=[26346], 80.00th=[28181], 90.00th=[36963], 95.00th=[43254], 00:10:46.225 | 99.00th=[55837], 99.50th=[56361], 99.90th=[59507], 99.95th=[59507], 00:10:46.225 | 99.99th=[59507] 00:10:46.225 bw ( KiB/s): min=10968, max=13608, per=18.90%, avg=12288.00, stdev=1866.76, samples=2 00:10:46.225 iops : min= 2742, max= 3402, avg=3072.00, stdev=466.69, samples=2 00:10:46.225 lat (msec) : 4=0.20%, 10=5.41%, 20=47.03%, 50=45.88%, 100=1.48% 00:10:46.225 cpu : usr=2.58%, sys=7.15%, ctx=377, majf=0, minf=1 
00:10:46.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:46.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.225 issued rwts: total=3064,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.225 job3: (groupid=0, jobs=1): err= 0: pid=94450: Tue Nov 19 23:35:20 2024 00:10:46.225 read: IOPS=4874, BW=19.0MiB/s (20.0MB/s)(19.1MiB/1004msec) 00:10:46.225 slat (usec): min=2, max=5979, avg=99.44, stdev=574.72 00:10:46.225 clat (usec): min=3341, max=18416, avg=12576.26, stdev=1842.49 00:10:46.225 lat (usec): min=3354, max=18667, avg=12675.70, stdev=1898.37 00:10:46.225 clat percentiles (usec): 00:10:46.225 | 1.00th=[ 8225], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[11994], 00:10:46.225 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:10:46.225 | 70.00th=[12780], 80.00th=[13173], 90.00th=[14877], 95.00th=[16188], 00:10:46.225 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:10:46.225 | 99.99th=[18482] 00:10:46.225 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:10:46.225 slat (usec): min=4, max=5985, avg=90.91, stdev=418.49 00:10:46.225 clat (usec): min=6819, max=18623, avg=12819.81, stdev=1484.88 00:10:46.225 lat (usec): min=6842, max=18680, avg=12910.72, stdev=1512.13 00:10:46.225 clat percentiles (usec): 00:10:46.225 | 1.00th=[ 7832], 5.00th=[10552], 10.00th=[11600], 20.00th=[12256], 00:10:46.225 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:46.225 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13435], 95.00th=[15926], 00:10:46.225 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:10:46.225 | 99.99th=[18744] 00:10:46.225 bw ( KiB/s): min=20480, max=20480, per=31.50%, avg=20480.00, stdev= 0.00, samples=2 00:10:46.225 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:46.225 lat (msec) : 4=0.31%, 10=5.87%, 20=93.82% 00:10:46.225 cpu : usr=5.68%, sys=10.07%, ctx=599, majf=0, minf=1 00:10:46.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:46.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.225 issued rwts: total=4894,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.225 00:10:46.225 Run status group 0 (all jobs): 00:10:46.225 READ: bw=60.1MiB/s (63.0MB/s), 8636KiB/s-20.8MiB/s (8843kB/s-21.9MB/s), io=60.6MiB (63.5MB), run=1004-1008msec 00:10:46.225 WRITE: bw=63.5MiB/s (66.6MB/s), 9.93MiB/s-21.8MiB/s (10.4MB/s-22.9MB/s), io=64.0MiB (67.1MB), run=1004-1008msec 00:10:46.225 00:10:46.225 Disk stats (read/write): 00:10:46.225 nvme0n1: ios=2050/2048, merge=0/0, ticks=18844/27517, in_queue=46361, util=96.49% 00:10:46.225 nvme0n2: ios=4596/4615, merge=0/0, ticks=51199/51444, in_queue=102643, util=87.01% 00:10:46.225 nvme0n3: ios=2586/2591, merge=0/0, ticks=44193/60473, in_queue=104666, util=90.94% 00:10:46.225 nvme0n4: ios=4142/4431, merge=0/0, ticks=25598/26215, in_queue=51813, util=92.97% 00:10:46.225 23:35:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:46.225 23:35:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=94588 00:10:46.225 23:35:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:46.225 23:35:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:46.225 [global] 00:10:46.225 thread=1 00:10:46.225 invalidate=1 00:10:46.225 rw=read 00:10:46.225 time_based=1 00:10:46.225 runtime=10 00:10:46.225 ioengine=libaio 00:10:46.225 direct=1 00:10:46.225 bs=4096 00:10:46.225 iodepth=1 00:10:46.225 norandommap=1 00:10:46.225 numjobs=1 00:10:46.225 00:10:46.225 [job0] 00:10:46.225 filename=/dev/nvme0n1 00:10:46.225 [job1] 00:10:46.225 filename=/dev/nvme0n2 00:10:46.225 [job2] 00:10:46.225 filename=/dev/nvme0n3 00:10:46.225 [job3] 00:10:46.225 filename=/dev/nvme0n4 00:10:46.225 Could not set queue depth (nvme0n1) 00:10:46.225 Could not set queue depth (nvme0n2) 00:10:46.225 Could not set queue depth (nvme0n3) 00:10:46.225 Could not set queue depth (nvme0n4) 00:10:46.482 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.482 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.482 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.482 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.482 fio-3.35 00:10:46.482 Starting 4 threads 00:10:49.762 23:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:49.762 23:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:49.762 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=454656, buflen=4096 00:10:49.762 fio: pid=94702, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:49.762 23:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.762 23:35:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:49.762 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=31817728, buflen=4096 00:10:49.762 fio: pid=94695, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:50.019 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9650176, buflen=4096 00:10:50.019 fio: pid=94681, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:50.019 23:35:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.019 23:35:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:50.277 23:35:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.278 23:35:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:50.278 fio: io_u error on file 
/dev/nvme0n2: Operation not supported: read offset=39645184, buflen=4096 00:10:50.278 fio: pid=94682, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:50.536 00:10:50.536 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=94681: Tue Nov 19 23:35:24 2024 00:10:50.536 read: IOPS=677, BW=2708KiB/s (2773kB/s)(9424KiB/3480msec) 00:10:50.536 slat (usec): min=4, max=11707, avg=20.37, stdev=393.00 00:10:50.536 clat (usec): min=167, max=41078, avg=1444.55, stdev=6967.78 00:10:50.536 lat (usec): min=172, max=41084, avg=1464.93, stdev=6979.25 00:10:50.536 clat percentiles (usec): 00:10:50.536 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:10:50.536 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:10:50.536 | 70.00th=[ 229], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 285], 00:10:50.536 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:50.536 | 99.99th=[41157] 00:10:50.536 bw ( KiB/s): min= 96, max= 2968, per=2.79%, avg=585.33, stdev=1167.54, samples=6 00:10:50.536 iops : min= 24, max= 742, avg=146.33, stdev=291.89, samples=6 00:10:50.536 lat (usec) : 250=84.05%, 500=12.81%, 750=0.08% 00:10:50.536 lat (msec) : 50=3.01% 00:10:50.536 cpu : usr=0.17%, sys=0.49%, ctx=2362, majf=0, minf=2 00:10:50.536 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.536 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.536 issued rwts: total=2357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.536 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.536 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=94682: Tue Nov 19 23:35:24 2024 00:10:50.536 read: IOPS=2550, BW=9.96MiB/s (10.4MB/s)(37.8MiB/3796msec) 00:10:50.536 slat (usec): min=5, max=16404, avg=16.83, stdev=252.75 00:10:50.536 clat (usec): min=214, max=41544, avg=369.66, stdev=1241.57 00:10:50.536 lat (usec): min=220, max=57462, avg=386.49, stdev=1320.39 00:10:50.536 clat percentiles (usec): 00:10:50.536 | 1.00th=[ 241], 5.00th=[ 265], 10.00th=[ 281], 20.00th=[ 310], 00:10:50.536 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:10:50.536 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 383], 00:10:50.536 | 99.00th=[ 498], 99.50th=[ 562], 99.90th=[ 947], 99.95th=[41157], 00:10:50.536 | 99.99th=[41681] 00:10:50.536 bw ( KiB/s): min= 7725, max=11800, per=51.65%, avg=10839.57, stdev=1409.59, samples=7 00:10:50.536 iops : min= 1931, max= 2950, avg=2709.86, stdev=352.49, samples=7 00:10:50.536 lat (usec) : 250=2.36%, 500=96.68%, 750=0.85%, 1000=0.01% 00:10:50.536 lat (msec) : 50=0.09% 00:10:50.536 cpu : usr=1.79%, sys=5.03%, ctx=9686, majf=0, minf=2 00:10:50.536 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.536 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.536 issued rwts: total=9680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.536 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.536 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=94695: Tue Nov 19 23:35:24 2024 00:10:50.536 read: IOPS=2437, BW=9750KiB/s (9984kB/s)(30.3MiB/3187msec) 00:10:50.536 slat 
(usec): min=5, max=12385, avg=14.90, stdev=154.88 00:10:50.536 clat (usec): min=243, max=41602, avg=388.86, stdev=1137.62 00:10:50.536 lat (usec): min=250, max=53516, avg=403.76, stdev=1203.60 00:10:50.536 clat percentiles (usec): 00:10:50.536 | 1.00th=[ 293], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 322], 00:10:50.536 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:10:50.536 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 400], 95.00th=[ 545], 00:10:50.536 | 99.00th=[ 685], 99.50th=[ 709], 99.90th=[ 816], 99.95th=[41157], 00:10:50.536 | 99.99th=[41681] 00:10:50.536 bw ( KiB/s): min= 8864, max=11304, per=49.15%, avg=10313.33, stdev=862.81, samples=6 00:10:50.536 iops : min= 2216, max= 2826, avg=2578.33, stdev=215.70, samples=6 00:10:50.536 lat (usec) : 250=0.04%, 500=93.69%, 750=6.06%, 1000=0.10% 00:10:50.536 lat (msec) : 10=0.01%, 50=0.08% 00:10:50.536 cpu : usr=1.98%, sys=4.87%, ctx=7771, majf=0, minf=1 00:10:50.536 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.536 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.536 issued rwts: total=7769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.536 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.536 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=94702: Tue Nov 19 23:35:24 2024 00:10:50.536 read: IOPS=38, BW=153KiB/s (157kB/s)(444KiB/2903msec) 00:10:50.536 slat (nsec): min=7206, max=38360, avg=21641.18, stdev=9518.74 00:10:50.536 clat (usec): min=269, max=41144, avg=25856.00, stdev=19590.66 00:10:50.536 lat (usec): min=282, max=41152, avg=25877.70, stdev=19594.16 00:10:50.536 clat percentiles (usec): 00:10:50.536 | 1.00th=[ 273], 5.00th=[ 334], 10.00th=[ 359], 20.00th=[ 371], 00:10:50.536 | 30.00th=[ 408], 40.00th=[40633], 50.00th=[40633], 60.00th=[40633], 00:10:50.536 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:50.536 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:50.536 | 99.99th=[41157] 00:10:50.536 bw ( KiB/s): min= 112, max= 184, per=0.74%, avg=155.20, stdev=30.25, samples=5 00:10:50.536 iops : min= 28, max= 46, avg=38.80, stdev= 7.56, samples=5 00:10:50.536 lat (usec) : 500=35.71%, 750=0.89% 00:10:50.536 lat (msec) : 50=62.50% 00:10:50.536 cpu : usr=0.17%, sys=0.00%, ctx=115, majf=0, minf=1 00:10:50.536 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.536 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.536 issued rwts: total=112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.536 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.536 00:10:50.536 Run status group 0 (all jobs): 00:10:50.536 READ: bw=20.5MiB/s (21.5MB/s), 153KiB/s-9.96MiB/s (157kB/s-10.4MB/s), io=77.8MiB (81.6MB), run=2903-3796msec 00:10:50.536 00:10:50.536 Disk stats (read/write): 00:10:50.536 nvme0n1: ios=1982/0, merge=0/0, ticks=3984/0, in_queue=3984, util=98.88% 00:10:50.536 nvme0n2: ios=9705/0, merge=0/0, ticks=4119/0, in_queue=4119, util=98.10% 00:10:50.536 nvme0n3: ios=7751/0, merge=0/0, ticks=2861/0, in_queue=2861, util=96.23% 00:10:50.536 nvme0n4: ios=153/0, merge=0/0, ticks=3634/0, in_queue=3634, util=98.75% 00:10:50.536 23:35:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.536 23:35:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:50.795 23:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.795 23:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:51.361 23:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.361 23:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:51.361 23:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.361 23:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:51.619 23:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:51.619 23:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 94588 00:10:51.619 23:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:51.619 23:35:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.877 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.877 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:51.877 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:51.877 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.877 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:51.877 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.877 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:51.877 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:51.877 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:51.877 nvmf hotplug test: fio failed as expected 00:10:51.877 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:52.135 
23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.135 rmmod nvme_tcp 00:10:52.135 rmmod nvme_fabrics 00:10:52.135 rmmod nvme_keyring 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 92551 ']' 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 92551 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 92551 ']' 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 92551 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.135 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92551 00:10:52.394 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.394 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.394 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92551' 00:10:52.394 killing process with pid 92551 00:10:52.394 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 92551 00:10:52.394 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 92551 00:10:52.394 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.394 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.394 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.394 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:52.394 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:52.395 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.395 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.395 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.395 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.395 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.395 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.395 23:35:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.928 00:10:54.928 real 0m24.068s 00:10:54.928 user 1m24.611s 00:10:54.928 sys 0m7.270s 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.928 ************************************ 00:10:54.928 END TEST nvmf_fio_target 00:10:54.928 ************************************ 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.928 ************************************ 00:10:54.928 START TEST nvmf_bdevio 00:10:54.928 ************************************ 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:54.928 * Looking for test storage... 
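[Editor's note] For readers following the xtrace output above: the Malloc3 through Malloc6 deletions, the nvme disconnect, and the subsystem removal all come from the fio target teardown (target/fio.sh @65-@91 in this run). A condensed sketch of that sequence, reconstructed only from the trace above (helper names such as waitforserial_disconnect and nvmftestfini are the repo's common-script helpers, shown here by name; paths, NQN and serial are the ones used in this particular run):

    # delete every malloc-backed bdev created for the fio jobs (fio.sh@65-66)
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done

    # detach the initiator and wait until the serial disappears from lsblk (fio.sh@72-73)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME

    # drop the subsystem, clean up fio state files, then tear the target down (fio.sh@83-91)
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
    nvmftestfini

Note that fio_status=4 here is expected: the hotplug job deliberately fails once its namespace is deleted, which is why the log prints "nvmf hotplug test: fio failed as expected".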
00:10:54.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:54.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.928 --rc genhtml_branch_coverage=1 00:10:54.928 --rc genhtml_function_coverage=1 00:10:54.928 --rc genhtml_legend=1 00:10:54.928 --rc geninfo_all_blocks=1 00:10:54.928 --rc geninfo_unexecuted_blocks=1 00:10:54.928 00:10:54.928 ' 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:54.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.928 --rc genhtml_branch_coverage=1 00:10:54.928 --rc genhtml_function_coverage=1 00:10:54.928 --rc genhtml_legend=1 00:10:54.928 --rc geninfo_all_blocks=1 00:10:54.928 --rc geninfo_unexecuted_blocks=1 00:10:54.928 00:10:54.928 ' 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:54.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.928 --rc genhtml_branch_coverage=1 00:10:54.928 --rc genhtml_function_coverage=1 00:10:54.928 --rc genhtml_legend=1 00:10:54.928 --rc geninfo_all_blocks=1 00:10:54.928 --rc geninfo_unexecuted_blocks=1 00:10:54.928 00:10:54.928 ' 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:54.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.928 --rc genhtml_branch_coverage=1 00:10:54.928 --rc genhtml_function_coverage=1 00:10:54.928 --rc genhtml_legend=1 00:10:54.928 --rc geninfo_all_blocks=1 00:10:54.928 --rc geninfo_unexecuted_blocks=1 00:10:54.928 00:10:54.928 ' 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:54.928 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.929 23:35:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.829 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.829 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.829 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:56.829 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.829 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:56.829 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.829 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.829 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.829 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.829 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:56.830 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:56.830 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:56.830 23:35:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:56.830 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:56.830 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.830 
23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.830 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:10:57.089 00:10:57.089 --- 10.0.0.2 ping statistics --- 00:10:57.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.089 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:10:57.089 00:10:57.089 --- 10.0.0.1 ping statistics --- 00:10:57.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.089 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=97439 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 97439 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 97439 ']' 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.089 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.089 [2024-11-19 23:35:31.242905] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:10:57.089 [2024-11-19 23:35:31.243003] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.089 [2024-11-19 23:35:31.316270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.089 [2024-11-19 23:35:31.366502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.089 [2024-11-19 23:35:31.366564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.089 [2024-11-19 23:35:31.366593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.089 [2024-11-19 23:35:31.366604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.089 [2024-11-19 23:35:31.366614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.089 [2024-11-19 23:35:31.368321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:57.089 [2024-11-19 23:35:31.368364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:57.089 [2024-11-19 23:35:31.368409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:57.089 [2024-11-19 23:35:31.368411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.348 [2024-11-19 23:35:31.521319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.348 Malloc0 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.348 23:35:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.348 [2024-11-19 23:35:31.588083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:57.348 { 00:10:57.348 "params": { 00:10:57.348 "name": "Nvme$subsystem", 00:10:57.348 "trtype": "$TEST_TRANSPORT", 00:10:57.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:57.348 "adrfam": "ipv4", 00:10:57.348 "trsvcid": "$NVMF_PORT", 00:10:57.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:57.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:57.348 "hdgst": ${hdgst:-false}, 00:10:57.348 "ddgst": ${ddgst:-false} 00:10:57.348 }, 00:10:57.348 "method": "bdev_nvme_attach_controller" 00:10:57.348 } 00:10:57.348 EOF 00:10:57.348 )") 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:57.348 23:35:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:57.348 "params": { 00:10:57.348 "name": "Nvme1", 00:10:57.348 "trtype": "tcp", 00:10:57.348 "traddr": "10.0.0.2", 00:10:57.348 "adrfam": "ipv4", 00:10:57.348 "trsvcid": "4420", 00:10:57.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:57.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:57.348 "hdgst": false, 00:10:57.348 "ddgst": false 00:10:57.348 }, 00:10:57.348 "method": "bdev_nvme_attach_controller" 00:10:57.348 }' 00:10:57.348 [2024-11-19 23:35:31.638318] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:10:57.348 [2024-11-19 23:35:31.638411] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97468 ] 00:10:57.606 [2024-11-19 23:35:31.707917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:57.606 [2024-11-19 23:35:31.760611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.606 [2024-11-19 23:35:31.760660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.606 [2024-11-19 23:35:31.760664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.864 I/O targets: 00:10:57.864 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:57.864 00:10:57.864 00:10:57.864 CUnit - A unit testing framework for C - Version 2.1-3 00:10:57.864 http://cunit.sourceforge.net/ 00:10:57.864 00:10:57.864 00:10:57.864 Suite: bdevio tests on: Nvme1n1 00:10:57.864 Test: blockdev write read block ...passed 00:10:57.864 Test: blockdev write zeroes read block ...passed 00:10:57.864 Test: blockdev write zeroes read no split ...passed 00:10:57.864 Test: blockdev write zeroes read split ...passed 00:10:57.864 Test: blockdev write zeroes read split partial ...passed 00:10:57.864 Test: blockdev reset ...[2024-11-19 23:35:32.135998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:57.864 [2024-11-19 23:35:32.136115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1968b70 (9): Bad file descriptor 00:10:58.122 [2024-11-19 23:35:32.189872] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:58.122 passed 00:10:58.122 Test: blockdev write read 8 blocks ...passed 00:10:58.122 Test: blockdev write read size > 128k ...passed 00:10:58.122 Test: blockdev write read invalid size ...passed 00:10:58.122 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:58.122 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:58.122 Test: blockdev write read max offset ...passed 00:10:58.122 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:58.122 Test: blockdev writev readv 8 blocks ...passed 00:10:58.122 Test: blockdev writev readv 30 x 1block ...passed 00:10:58.122 Test: blockdev writev readv block ...passed 00:10:58.122 Test: blockdev writev readv size > 128k ...passed 00:10:58.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:58.122 Test: blockdev comparev and writev ...[2024-11-19 23:35:32.362475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.122 [2024-11-19 23:35:32.362510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:58.122 [2024-11-19 23:35:32.362536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.122 [2024-11-19 23:35:32.362553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:58.122 [2024-11-19 23:35:32.362902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.122 [2024-11-19 23:35:32.362927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:58.122 [2024-11-19 23:35:32.362949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.122 [2024-11-19 23:35:32.362974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:58.122 [2024-11-19 23:35:32.363320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.122 [2024-11-19 23:35:32.363345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:58.122 [2024-11-19 23:35:32.363366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.122 [2024-11-19 23:35:32.363382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:58.123 [2024-11-19 23:35:32.363711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.123 [2024-11-19 23:35:32.363735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:58.123 [2024-11-19 23:35:32.363756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:58.123 [2024-11-19 23:35:32.363772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:58.123 passed 00:10:58.381 Test: blockdev nvme passthru rw ...passed 00:10:58.381 Test: blockdev nvme passthru vendor specific ...[2024-11-19 23:35:32.446359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:58.381 [2024-11-19 23:35:32.446387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:58.381 [2024-11-19 23:35:32.446534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:58.381 [2024-11-19 23:35:32.446558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:58.381 [2024-11-19 23:35:32.446705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:58.381 [2024-11-19 23:35:32.446729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:58.381 [2024-11-19 23:35:32.446889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:58.381 [2024-11-19 23:35:32.446913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:58.381 passed 00:10:58.381 Test: blockdev nvme admin passthru ...passed 00:10:58.381 Test: blockdev copy ...passed 00:10:58.381 00:10:58.381 Run Summary: Type Total Ran Passed Failed Inactive 00:10:58.381 suites 1 1 n/a 0 0 00:10:58.381 tests 23 23 23 0 0 00:10:58.381 asserts 152 152 152 0 n/a 00:10:58.381 00:10:58.381 Elapsed time = 1.133 seconds 00:10:58.381 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.381 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.381 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.381 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.381 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:58.381 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:58.381 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:58.382 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:58.382 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:58.382 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:58.382 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:58.382 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:58.639 rmmod nvme_tcp 00:10:58.639 rmmod nvme_fabrics 00:10:58.639 rmmod nvme_keyring 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
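[Editor's note] Before the teardown above, the bdevio run was driven by a handful of RPCs that are visible in the trace (bdevio.sh @18-@24). Restating them in one place, as a sketch only (rpc_cmd and gen_nvmf_target_json are helpers from the SPDK test scripts; the address, port and NQN are the ones used in this run):

    # target-side setup traced at bdevio.sh@18-22
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side (bdevio.sh@24): the bdevio app consumes the JSON printed earlier in the
    # log, which attaches Nvme1 over TCP to 10.0.0.2:4420 via bdev_nvme_attach_controller
    test/bdev/bdevio/bdevio --json /dev/fd/62

The COMPARE FAILURE / ABORTED - FAILED FUSED completions in the "comparev and writev" test above are the expected negative path of that test, and the summary confirms all 23 tests passed.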
00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 97439 ']' 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 97439 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 97439 ']' 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 97439 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97439 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97439' 00:10:58.639 killing process with pid 97439 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 97439 00:10:58.639 23:35:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 97439 00:10:58.898 23:35:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:58.898 23:35:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:58.898 23:35:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:58.898 23:35:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:58.898 23:35:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:58.898 23:35:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:58.898 23:35:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:58.898 23:35:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.898 23:35:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.898 23:35:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.898 23:35:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.898 23:35:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.802 23:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:00.802 00:11:00.802 real 0m6.301s 00:11:00.802 user 0m9.417s 00:11:00.802 sys 0m2.210s 00:11:00.802 23:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.802 23:35:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.802 ************************************ 00:11:00.802 END TEST nvmf_bdevio 00:11:00.802 ************************************ 00:11:00.802 23:35:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:00.802 00:11:00.802 real 3m53.846s 00:11:00.802 user 10m11.269s 00:11:00.802 sys 1m6.969s 00:11:00.802 23:35:35 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.802 23:35:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.802 ************************************ 00:11:00.802 END TEST nvmf_target_core 00:11:00.802 ************************************ 00:11:01.062 23:35:35 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:01.062 23:35:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.062 23:35:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.062 23:35:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:01.062 ************************************ 00:11:01.062 START TEST nvmf_target_extra 00:11:01.062 ************************************ 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:01.062 * Looking for test storage... 00:11:01.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.062 --rc genhtml_branch_coverage=1 00:11:01.062 --rc genhtml_function_coverage=1 00:11:01.062 --rc genhtml_legend=1 00:11:01.062 --rc geninfo_all_blocks=1 00:11:01.062 --rc geninfo_unexecuted_blocks=1 00:11:01.062 00:11:01.062 ' 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.062 --rc genhtml_branch_coverage=1 00:11:01.062 --rc genhtml_function_coverage=1 00:11:01.062 --rc genhtml_legend=1 00:11:01.062 --rc geninfo_all_blocks=1 00:11:01.062 --rc geninfo_unexecuted_blocks=1 00:11:01.062 00:11:01.062 ' 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.062 --rc genhtml_branch_coverage=1 00:11:01.062 --rc genhtml_function_coverage=1 00:11:01.062 --rc genhtml_legend=1 00:11:01.062 --rc geninfo_all_blocks=1 00:11:01.062 --rc geninfo_unexecuted_blocks=1 00:11:01.062 00:11:01.062 ' 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.062 --rc genhtml_branch_coverage=1 00:11:01.062 --rc genhtml_function_coverage=1 00:11:01.062 --rc genhtml_legend=1 00:11:01.062 --rc geninfo_all_blocks=1 00:11:01.062 --rc geninfo_unexecuted_blocks=1 00:11:01.062 00:11:01.062 ' 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.062 23:35:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:01.063 ************************************ 00:11:01.063 START TEST nvmf_example 00:11:01.063 ************************************ 00:11:01.063 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:01.322 * Looking for test storage... 
00:11:01.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.322 --rc genhtml_branch_coverage=1 00:11:01.322 --rc genhtml_function_coverage=1 00:11:01.322 --rc genhtml_legend=1 00:11:01.322 --rc geninfo_all_blocks=1 00:11:01.322 --rc geninfo_unexecuted_blocks=1 00:11:01.322 00:11:01.322 ' 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.322 --rc genhtml_branch_coverage=1 00:11:01.322 --rc genhtml_function_coverage=1 00:11:01.322 --rc genhtml_legend=1 00:11:01.322 --rc geninfo_all_blocks=1 00:11:01.322 --rc geninfo_unexecuted_blocks=1 00:11:01.322 00:11:01.322 ' 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.322 --rc genhtml_branch_coverage=1 00:11:01.322 --rc genhtml_function_coverage=1 00:11:01.322 --rc genhtml_legend=1 00:11:01.322 --rc geninfo_all_blocks=1 00:11:01.322 --rc geninfo_unexecuted_blocks=1 00:11:01.322 00:11:01.322 ' 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.322 --rc genhtml_branch_coverage=1 00:11:01.322 --rc genhtml_function_coverage=1 00:11:01.322 --rc genhtml_legend=1 00:11:01.322 --rc geninfo_all_blocks=1 00:11:01.322 --rc geninfo_unexecuted_blocks=1 00:11:01.322 00:11:01.322 ' 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:01.322 23:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.322 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:01.323 23:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.323 23:35:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:03.225 23:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:03.225 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:03.225 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:03.225 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.225 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:03.225 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.226 23:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.226 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.484 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.484 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.484 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:03.484 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.484 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.484 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:03.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:03.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:11:03.485 00:11:03.485 --- 10.0.0.2 ping statistics --- 00:11:03.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.485 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:11:03.485 00:11:03.485 --- 10.0.0.1 ping statistics --- 00:11:03.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.485 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=99700 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 99700 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 99700 ']' 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.485 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.743 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.743 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:03.743 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:03.743 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.743 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.743 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.743 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.743 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.743 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.743 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:03.743 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.743 23:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.743 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.743 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:03.743 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:03.743 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.743 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.743 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.743 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:03.743 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:03.743 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.743 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:04.001 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.001 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.001 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.001 23:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:04.001 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.001 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:04.001 23:35:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:14.103 Initializing NVMe Controllers 00:11:14.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:14.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:14.103 Initialization complete. Launching workers. 00:11:14.103 ======================================================== 00:11:14.103 Latency(us) 00:11:14.103 Device Information : IOPS MiB/s Average min max 00:11:14.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15249.05 59.57 4197.72 843.48 15765.72 00:11:14.103 ======================================================== 00:11:14.103 Total : 15249.05 59.57 4197.72 843.48 15765.72 00:11:14.103 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:14.103 rmmod nvme_tcp 00:11:14.103 rmmod nvme_fabrics 00:11:14.103 rmmod nvme_keyring 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 99700 ']' 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 99700 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 99700 ']' 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 99700 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99700 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
process_name=nvmf 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99700' 00:11:14.103 killing process with pid 99700 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 99700 00:11:14.103 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 99700 00:11:14.363 nvmf threads initialize successfully 00:11:14.363 bdev subsystem init successfully 00:11:14.363 created a nvmf target service 00:11:14.363 create targets's poll groups done 00:11:14.363 all subsystems of target started 00:11:14.363 nvmf target is running 00:11:14.363 all subsystems of target stopped 00:11:14.363 destroy targets's poll groups done 00:11:14.363 destroyed the nvmf target service 00:11:14.363 bdev subsystem finish successfully 00:11:14.363 nvmf threads destroy successfully 00:11:14.363 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.363 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:14.363 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:14.363 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:14.363 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:14.363 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:14.363 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:14.363 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:14.363 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:14.363 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.363 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.363 23:35:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.897 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:16.897 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:16.897 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.897 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.897 00:11:16.897 real 0m15.327s 00:11:16.897 user 0m42.394s 00:11:16.897 sys 0m3.229s 00:11:16.897 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.897 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.897 ************************************ 00:11:16.897 END TEST nvmf_example 00:11:16.897 ************************************ 00:11:16.897 23:35:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:16.897 23:35:50 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.897 23:35:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.897 23:35:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:16.898 ************************************ 00:11:16.898 START TEST nvmf_filesystem 00:11:16.898 ************************************ 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:16.898 * Looking for test storage... 00:11:16.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:16.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.898 --rc genhtml_branch_coverage=1 00:11:16.898 --rc genhtml_function_coverage=1 00:11:16.898 --rc genhtml_legend=1 00:11:16.898 --rc geninfo_all_blocks=1 00:11:16.898 --rc geninfo_unexecuted_blocks=1 00:11:16.898 00:11:16.898 ' 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:16.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.898 --rc genhtml_branch_coverage=1 00:11:16.898 --rc genhtml_function_coverage=1 00:11:16.898 --rc genhtml_legend=1 00:11:16.898 --rc geninfo_all_blocks=1 00:11:16.898 --rc geninfo_unexecuted_blocks=1 00:11:16.898 00:11:16.898 ' 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:16.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.898 --rc genhtml_branch_coverage=1 00:11:16.898 --rc genhtml_function_coverage=1 00:11:16.898 --rc genhtml_legend=1 00:11:16.898 --rc geninfo_all_blocks=1 00:11:16.898 --rc geninfo_unexecuted_blocks=1 00:11:16.898 00:11:16.898 ' 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:16.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.898 --rc genhtml_branch_coverage=1 00:11:16.898 --rc genhtml_function_coverage=1 00:11:16.898 --rc genhtml_legend=1 00:11:16.898 --rc geninfo_all_blocks=1 00:11:16.898 --rc geninfo_unexecuted_blocks=1 00:11:16.898 00:11:16.898 ' 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:16.898 23:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:16.898 
23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:16.898 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:16.899 23:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:16.899 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:16.899 #define SPDK_CONFIG_H 00:11:16.899 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:16.899 #define SPDK_CONFIG_APPS 1 00:11:16.899 #define SPDK_CONFIG_ARCH native 00:11:16.899 #undef SPDK_CONFIG_ASAN 00:11:16.899 #undef SPDK_CONFIG_AVAHI 00:11:16.899 #undef SPDK_CONFIG_CET 00:11:16.899 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:16.899 #define SPDK_CONFIG_COVERAGE 1 00:11:16.899 #define SPDK_CONFIG_CROSS_PREFIX 00:11:16.899 #undef SPDK_CONFIG_CRYPTO 00:11:16.899 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:16.899 #undef SPDK_CONFIG_CUSTOMOCF 00:11:16.899 #undef SPDK_CONFIG_DAOS 00:11:16.899 #define SPDK_CONFIG_DAOS_DIR 00:11:16.899 #define SPDK_CONFIG_DEBUG 1 00:11:16.899 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:16.899 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:16.899 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:16.899 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:16.899 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:16.899 #undef SPDK_CONFIG_DPDK_UADK 00:11:16.899 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:16.899 #define SPDK_CONFIG_EXAMPLES 1 00:11:16.899 #undef SPDK_CONFIG_FC 00:11:16.899 #define SPDK_CONFIG_FC_PATH 00:11:16.899 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:16.899 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:16.899 #define SPDK_CONFIG_FSDEV 1 00:11:16.899 #undef SPDK_CONFIG_FUSE 00:11:16.899 #undef SPDK_CONFIG_FUZZER 00:11:16.899 #define SPDK_CONFIG_FUZZER_LIB 00:11:16.899 #undef SPDK_CONFIG_GOLANG 00:11:16.899 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:16.899 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:16.899 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:16.899 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:16.899 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:16.899 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:16.899 #undef SPDK_CONFIG_HAVE_LZ4 00:11:16.899 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:16.899 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:16.899 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:16.899 #define SPDK_CONFIG_IDXD 1 00:11:16.899 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:16.899 #undef SPDK_CONFIG_IPSEC_MB 00:11:16.899 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:16.899 #define SPDK_CONFIG_ISAL 1 00:11:16.899 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:16.899 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:16.899 #define SPDK_CONFIG_LIBDIR 00:11:16.899 #undef SPDK_CONFIG_LTO 00:11:16.899 #define SPDK_CONFIG_MAX_LCORES 128 00:11:16.899 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:16.899 #define SPDK_CONFIG_NVME_CUSE 1 00:11:16.899 #undef SPDK_CONFIG_OCF 00:11:16.899 #define SPDK_CONFIG_OCF_PATH 00:11:16.899 #define SPDK_CONFIG_OPENSSL_PATH 00:11:16.899 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:16.899 #define SPDK_CONFIG_PGO_DIR 00:11:16.899 #undef SPDK_CONFIG_PGO_USE 00:11:16.899 #define SPDK_CONFIG_PREFIX /usr/local 00:11:16.899 #undef SPDK_CONFIG_RAID5F 00:11:16.899 #undef SPDK_CONFIG_RBD 00:11:16.899 #define SPDK_CONFIG_RDMA 1 00:11:16.899 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:16.899 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:16.900 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:16.900 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:16.900 #define SPDK_CONFIG_SHARED 1 00:11:16.900 #undef SPDK_CONFIG_SMA 00:11:16.900 #define SPDK_CONFIG_TESTS 1 00:11:16.900 #undef SPDK_CONFIG_TSAN 00:11:16.900 #define SPDK_CONFIG_UBLK 1 00:11:16.900 #define SPDK_CONFIG_UBSAN 1 00:11:16.900 #undef SPDK_CONFIG_UNIT_TESTS 00:11:16.900 #undef SPDK_CONFIG_URING 00:11:16.900 #define SPDK_CONFIG_URING_PATH 00:11:16.900 #undef SPDK_CONFIG_URING_ZNS 00:11:16.900 #undef SPDK_CONFIG_USDT 00:11:16.900 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:16.900 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:16.900 #define SPDK_CONFIG_VFIO_USER 1 00:11:16.900 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:16.900 #define SPDK_CONFIG_VHOST 1 00:11:16.900 #define SPDK_CONFIG_VIRTIO 1 00:11:16.900 #undef SPDK_CONFIG_VTUNE 00:11:16.900 #define SPDK_CONFIG_VTUNE_DIR 00:11:16.900 #define SPDK_CONFIG_WERROR 1 00:11:16.900 #define SPDK_CONFIG_WPDK_DIR 00:11:16.900 #undef SPDK_CONFIG_XNVME 00:11:16.900 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:16.900 23:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:16.900 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:16.901 23:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:16.901 
23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.901 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:16.902 23:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 101302 ]] 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 101302 00:11:16.902 23:35:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:16.902 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:16.902 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:16.902 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:16.902 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:16.902 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:16.902 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:16.902 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:16.902 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.x5EEBn 00:11:16.902 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:16.902 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:16.902 23:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:16.902 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.x5EEBn/tests/target /tmp/spdk.x5EEBn 00:11:16.902 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=53189300224 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988528128 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=8799227904 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
avails["$mount"]=30982897664 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375269376 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22437888 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993809408 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=454656 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:16.903 * Looking for test storage... 
00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=53189300224 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=11013820416 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.903 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:16.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.904 --rc genhtml_branch_coverage=1 00:11:16.904 --rc genhtml_function_coverage=1 00:11:16.904 --rc genhtml_legend=1 00:11:16.904 --rc geninfo_all_blocks=1 00:11:16.904 --rc geninfo_unexecuted_blocks=1 00:11:16.904 00:11:16.904 ' 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:16.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.904 --rc genhtml_branch_coverage=1 00:11:16.904 --rc genhtml_function_coverage=1 00:11:16.904 --rc genhtml_legend=1 00:11:16.904 --rc geninfo_all_blocks=1 00:11:16.904 --rc geninfo_unexecuted_blocks=1 00:11:16.904 00:11:16.904 ' 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:16.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.904 --rc genhtml_branch_coverage=1 00:11:16.904 --rc genhtml_function_coverage=1 00:11:16.904 --rc genhtml_legend=1 00:11:16.904 --rc geninfo_all_blocks=1 00:11:16.904 --rc geninfo_unexecuted_blocks=1 00:11:16.904 00:11:16.904 ' 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:16.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.904 --rc genhtml_branch_coverage=1 00:11:16.904 --rc genhtml_function_coverage=1 00:11:16.904 --rc genhtml_legend=1 00:11:16.904 --rc geninfo_all_blocks=1 00:11:16.904 --rc geninfo_unexecuted_blocks=1 00:11:16.904 00:11:16.904 ' 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.904 23:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:16.904 23:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:19.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:19.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.456 23:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.456 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:19.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:19.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:19.457 23:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:19.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:11:19.457 00:11:19.457 --- 10.0.0.2 ping statistics --- 00:11:19.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.457 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:11:19.457 00:11:19.457 --- 10.0.0.1 ping statistics --- 00:11:19.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.457 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.457 ************************************ 00:11:19.457 START TEST nvmf_filesystem_no_in_capsule 00:11:19.457 ************************************ 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=102954 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 102954 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 102954 ']' 00:11:19.457 23:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.457 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.457 [2024-11-19 23:35:53.562769] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:11:19.457 [2024-11-19 23:35:53.562847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.457 [2024-11-19 23:35:53.636787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.457 [2024-11-19 23:35:53.686425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.457 [2024-11-19 23:35:53.686477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.457 [2024-11-19 23:35:53.686491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.457 [2024-11-19 23:35:53.686502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.457 [2024-11-19 23:35:53.686511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
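Condensed, the nvmf_tcp_init / nvmfappstart steps above do the following: move the target-side port of the NIC pair into a private network namespace, address both sides on 10.0.0.0/24, open TCP/4420 in iptables, then launch nvmf_tgt inside the namespace and poll its RPC socket until it answers. A hedged standalone approximation (the interface and namespace names, nvmf_tgt flags and SPDK path are taken from the trace; the polling loop is a simplification of waitforlisten, not its exact code):

  #!/usr/bin/env bash
  # Sketch of the target-side namespace setup and nvmf_tgt launch,
  # condensed from the nvmf/common.sh trace above.
  set -e
  NS=cvl_0_0_ns_spdk
  TGT_IF=cvl_0_0          # target-side port, moved into the namespace
  INI_IF=cvl_0_1          # initiator-side port, stays in the root namespace
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the log

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

  # Start the target inside the namespace and wait for its RPC socket.
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!   # pid of the ip-netns wrapper; enough to detect a crashed target here
  for _ in $(seq 1 100); do
      if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      kill -0 "$nvmfpid"   # abort (via set -e) if the target died during startup
      sleep 0.1
  done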
00:11:19.457 [2024-11-19 23:35:53.688027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.457 [2024-11-19 23:35:53.688095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.457 [2024-11-19 23:35:53.688160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.457 [2024-11-19 23:35:53.688162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.716 [2024-11-19 23:35:53.830577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.716 23:35:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.716 Malloc1 00:11:19.716 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.716 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:19.716 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.716 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.716 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.716 23:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.716 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.716 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.716 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.716 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.716 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.716 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.716 [2024-11-19 23:35:54.023446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.974 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.974 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:19.974 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:19.974 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:19.975 { 00:11:19.975 "name": "Malloc1", 00:11:19.975 "aliases": [ 00:11:19.975 "f43f246c-f2f8-413e-9651-73732957481a" 00:11:19.975 ], 00:11:19.975 "product_name": "Malloc disk", 00:11:19.975 "block_size": 512, 00:11:19.975 "num_blocks": 1048576, 00:11:19.975 "uuid": "f43f246c-f2f8-413e-9651-73732957481a", 00:11:19.975 "assigned_rate_limits": { 00:11:19.975 "rw_ios_per_sec": 0, 00:11:19.975 "rw_mbytes_per_sec": 0, 00:11:19.975 "r_mbytes_per_sec": 0, 00:11:19.975 "w_mbytes_per_sec": 0 00:11:19.975 }, 00:11:19.975 "claimed": true, 00:11:19.975 "claim_type": "exclusive_write", 00:11:19.975 "zoned": false, 00:11:19.975 "supported_io_types": { 00:11:19.975 "read": 
true, 00:11:19.975 "write": true, 00:11:19.975 "unmap": true, 00:11:19.975 "flush": true, 00:11:19.975 "reset": true, 00:11:19.975 "nvme_admin": false, 00:11:19.975 "nvme_io": false, 00:11:19.975 "nvme_io_md": false, 00:11:19.975 "write_zeroes": true, 00:11:19.975 "zcopy": true, 00:11:19.975 "get_zone_info": false, 00:11:19.975 "zone_management": false, 00:11:19.975 "zone_append": false, 00:11:19.975 "compare": false, 00:11:19.975 "compare_and_write": false, 00:11:19.975 "abort": true, 00:11:19.975 "seek_hole": false, 00:11:19.975 "seek_data": false, 00:11:19.975 "copy": true, 00:11:19.975 "nvme_iov_md": false 00:11:19.975 }, 00:11:19.975 "memory_domains": [ 00:11:19.975 { 00:11:19.975 "dma_device_id": "system", 00:11:19.975 "dma_device_type": 1 00:11:19.975 }, 00:11:19.975 { 00:11:19.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.975 "dma_device_type": 2 00:11:19.975 } 00:11:19.975 ], 00:11:19.975 "driver_specific": {} 00:11:19.975 } 00:11:19.975 ]' 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:19.975 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:20.540 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:20.540 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:20.540 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:20.540 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:20.540 23:35:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:23.065 23:35:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:23.321 23:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.254 ************************************ 00:11:24.254 START TEST filesystem_ext4 00:11:24.254 ************************************ 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
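The run_test body above configures the target entirely over JSON-RPC and then attaches the kernel initiator: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev, a subsystem with one namespace and one listener, followed by nvme connect and a GPT partition spanning the device. A condensed sketch using rpc.py directly (rpc_cmd in the trace wraps the same script; the --hostnqn/--hostid flags seen in the log are omitted here for brevity):

  #!/usr/bin/env bash
  # Sketch of the target configuration and initiator attach seen in the trace:
  # one 512 MiB malloc bdev exported as a namespace of cnode1 over TCP/4420.
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

  rpc nvmf_create_transport -t tcp -o -u 8192 -c 0           # in-capsule data size 0
  rpc bdev_malloc_create 512 512 -b Malloc1                   # 512 MiB, 512 B blocks
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: connect, wait for the namespace to appear, then partition it.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/{print $1; exit}')
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe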
00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:24.254 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:24.254 mke2fs 1.47.0 (5-Feb-2023) 00:11:24.512 Discarding device blocks: 0/522240 done 00:11:24.512 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:24.512 Filesystem UUID: fcdf0b8e-5f46-469c-99e3-523dd713d851 00:11:24.512 Superblock backups stored on blocks: 00:11:24.512 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:24.512 00:11:24.512 Allocating group tables: 0/64 done 00:11:24.512 Writing inode tables: 0/64 done 00:11:24.512 Creating journal (8192 blocks): done 00:11:24.512 Writing superblocks and filesystem accounting information: 0/64 done 00:11:24.512 00:11:24.512 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:24.512 23:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:29.779 23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:29.779 23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:29.779 23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:29.779 23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:29.779 23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:29.779 23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:29.779 
23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 102954 00:11:29.779 23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:29.779 23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:29.779 23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:29.779 23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:29.779 00:11:29.779 real 0m5.455s 00:11:29.779 user 0m0.014s 00:11:29.779 sys 0m0.064s 00:11:29.779 23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.779 23:36:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:29.779 ************************************ 00:11:29.779 END TEST filesystem_ext4 00:11:29.779 ************************************ 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.779 ************************************ 00:11:29.779 START TEST filesystem_btrfs 00:11:29.779 ************************************ 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:29.779 23:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:29.779 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:30.037 btrfs-progs v6.8.1 00:11:30.037 See https://btrfs.readthedocs.io for more information. 00:11:30.037 00:11:30.037 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:30.037 NOTE: several default settings have changed in version 5.15, please make sure 00:11:30.037 this does not affect your deployments: 00:11:30.037 - DUP for metadata (-m dup) 00:11:30.037 - enabled no-holes (-O no-holes) 00:11:30.037 - enabled free-space-tree (-R free-space-tree) 00:11:30.037 00:11:30.037 Label: (null) 00:11:30.037 UUID: 8fc792e3-7586-40e6-bf74-8fcd6718f336 00:11:30.037 Node size: 16384 00:11:30.037 Sector size: 4096 (CPU page size: 4096) 00:11:30.037 Filesystem size: 510.00MiB 00:11:30.037 Block group profiles: 00:11:30.037 Data: single 8.00MiB 00:11:30.037 Metadata: DUP 32.00MiB 00:11:30.037 System: DUP 8.00MiB 00:11:30.037 SSD detected: yes 00:11:30.037 Zoned device: no 00:11:30.037 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:30.037 Checksum: crc32c 00:11:30.037 Number of devices: 1 00:11:30.037 Devices: 00:11:30.037 ID SIZE PATH 00:11:30.037 1 510.00MiB /dev/nvme0n1p1 00:11:30.037 00:11:30.037 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:30.037 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 102954 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:30.603 
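The btrfs leg runs through the same make_filesystem helper; the only visible difference is the force flag it selects before invoking mkfs (-F for ext4, -f for btrfs and xfs). A rough approximation of that selection, written from what the trace shows rather than from the helper's source (the real helper also declares an i counter, visible in the trace, which is not modeled here):

# Approximation of the force-flag choice seen in the trace (not the real helper).
make_filesystem_sketch() {
  local fstype=$1 dev_name=$2 force
  if [ "$fstype" = ext4 ]; then
    force=-F
  else
    force=-f
  fi
  "mkfs.$fstype" "$force" "$dev_name"
}
# Example: make_filesystem_sketch btrfs /dev/nvme0n1p1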
23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.603 00:11:30.603 real 0m0.827s 00:11:30.603 user 0m0.019s 00:11:30.603 sys 0m0.096s 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:30.603 ************************************ 00:11:30.603 END TEST filesystem_btrfs 00:11:30.603 ************************************ 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.603 ************************************ 00:11:30.603 START TEST filesystem_xfs 00:11:30.603 ************************************ 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:30.603 23:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:30.861 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:30.861 = sectsz=512 attr=2, projid32bit=1 00:11:30.861 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:30.861 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:30.861 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:30.861 = sunit=0 swidth=0 blks 00:11:30.861 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:30.861 log =internal log bsize=4096 blocks=16384, version=2 00:11:30.861 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:30.861 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:31.793 Discarding blocks...Done. 00:11:31.793 23:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:31.793 23:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 102954 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:34.319 00:11:34.319 real 0m3.345s 00:11:34.319 user 0m0.016s 00:11:34.319 sys 0m0.064s 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:34.319 ************************************ 00:11:34.319 END TEST filesystem_xfs 00:11:34.319 ************************************ 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.319 23:36:08 
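With ext4, btrfs and xfs all passing, the trace tears the host side down: the test partition is deleted under an flock on the device, buffers are flushed, and the NVMe/TCP session is disconnected. A hedged equivalent of those teardown commands, using the NQN from this run:

# Teardown as traced: drop partition 1, flush, disconnect the NVMe/TCP host session.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# waitforserial_disconnect then polls lsblk -l -o NAME,SERIAL until the
# SPDKISFASTANDAWESOME serial no longer appears.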
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:34.319 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 102954 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 102954 ']' 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 102954 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102954 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102954' 00:11:34.320 killing process with pid 102954 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 102954 00:11:34.320 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
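The subsystem is then removed over RPC and the target process is stopped. killprocess in the trace first confirms the pid is alive with kill -0 and checks the process name before signalling it; a simplified sketch, assuming rpc_cmd forwards to SPDK's scripts/rpc.py (the trace only shows the wrapper) and using this run's pid:

nvmfpid=102954                          # pid reported earlier in this run
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
if kill -0 "$nvmfpid" 2>/dev/null; then # still running?
  kill "$nvmfpid"                       # plain kill, i.e. SIGTERM, as in the trace
  wait "$nvmfpid" 2>/dev/null || true   # only reaps it if the target is a child of this shell
fi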
common/autotest_common.sh@978 -- # wait 102954 00:11:34.578 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:34.578 00:11:34.578 real 0m15.333s 00:11:34.578 user 0m59.282s 00:11:34.578 sys 0m2.029s 00:11:34.578 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.578 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.578 ************************************ 00:11:34.578 END TEST nvmf_filesystem_no_in_capsule 00:11:34.578 ************************************ 00:11:34.578 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:34.578 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.578 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.578 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.837 ************************************ 00:11:34.837 START TEST nvmf_filesystem_in_capsule 00:11:34.837 ************************************ 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=105643 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 105643 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 105643 ']' 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.837 23:36:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.837 [2024-11-19 23:36:08.943990] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:11:34.837 [2024-11-19 23:36:08.944088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.837 [2024-11-19 23:36:09.017007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.837 [2024-11-19 23:36:09.065045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.837 [2024-11-19 23:36:09.065133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.837 [2024-11-19 23:36:09.065163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.837 [2024-11-19 23:36:09.065174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.837 [2024-11-19 23:36:09.065184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.837 [2024-11-19 23:36:09.066796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.837 [2024-11-19 23:36:09.066862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.837 [2024-11-19 23:36:09.066993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.837 [2024-11-19 23:36:09.066995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.095 [2024-11-19 23:36:09.216347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.095 23:36:09 
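The second pass, nvmf_filesystem_in_capsule, repeats the whole sequence with one difference: the TCP transport is created with -c 4096, the in_capsule value passed into the test, so small write payloads can be carried inside the command capsule. A hedged sketch of the bring-up, assuming rpc_cmd forwards to scripts/rpc.py and using the namespace, binary path and flags from this run:

# Start the target in the test's network namespace, then create the TCP transport.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# waitforlisten in the trace blocks here until /var/tmp/spdk.sock answers RPCs.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096  # -c carries in_capsule=4096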
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.095 Malloc1 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.095 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.352 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.353 [2024-11-19 23:36:09.409499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:35.353 23:36:09 
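With the transport up, the target side is provisioned entirely over RPC: a 512 MiB malloc bdev with 512-byte blocks (matching the block_size/num_blocks in the JSON dump that follows), a subsystem carrying the SPDKISFASTANDAWESOME serial, the namespace, and a TCP listener on 10.0.0.2:4420. The same sequence written against scripts/rpc.py, which the rpc_cmd wrapper is assumed to call:

./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1       # 512 MiB of RAM, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The get_bdev_size helper that follows reads block_size and num_blocks back out of bdev_get_bdevs with jq to arrive at the 536870912-byte malloc_size that the later size comparison relies on.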
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:35.353 { 00:11:35.353 "name": "Malloc1", 00:11:35.353 "aliases": [ 00:11:35.353 "fd06b672-d00e-4d69-b266-6b971172f7d3" 00:11:35.353 ], 00:11:35.353 "product_name": "Malloc disk", 00:11:35.353 "block_size": 512, 00:11:35.353 "num_blocks": 1048576, 00:11:35.353 "uuid": "fd06b672-d00e-4d69-b266-6b971172f7d3", 00:11:35.353 "assigned_rate_limits": { 00:11:35.353 "rw_ios_per_sec": 0, 00:11:35.353 "rw_mbytes_per_sec": 0, 00:11:35.353 "r_mbytes_per_sec": 0, 00:11:35.353 "w_mbytes_per_sec": 0 00:11:35.353 }, 00:11:35.353 "claimed": true, 00:11:35.353 "claim_type": "exclusive_write", 00:11:35.353 "zoned": false, 00:11:35.353 "supported_io_types": { 00:11:35.353 "read": true, 00:11:35.353 "write": true, 00:11:35.353 "unmap": true, 00:11:35.353 "flush": true, 00:11:35.353 "reset": true, 00:11:35.353 "nvme_admin": false, 00:11:35.353 "nvme_io": false, 00:11:35.353 "nvme_io_md": false, 00:11:35.353 "write_zeroes": true, 00:11:35.353 "zcopy": true, 00:11:35.353 "get_zone_info": false, 00:11:35.353 "zone_management": false, 00:11:35.353 "zone_append": false, 00:11:35.353 "compare": false, 00:11:35.353 "compare_and_write": false, 00:11:35.353 "abort": true, 00:11:35.353 "seek_hole": false, 00:11:35.353 "seek_data": false, 00:11:35.353 "copy": true, 00:11:35.353 "nvme_iov_md": false 00:11:35.353 }, 00:11:35.353 "memory_domains": [ 00:11:35.353 { 00:11:35.353 "dma_device_id": "system", 00:11:35.353 "dma_device_type": 1 00:11:35.353 }, 00:11:35.353 { 00:11:35.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.353 "dma_device_type": 2 00:11:35.353 } 00:11:35.353 ], 00:11:35.353 "driver_specific": {} 00:11:35.353 } 00:11:35.353 ]' 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:35.353 23:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:35.918 23:36:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:35.918 23:36:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:35.918 23:36:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.918 23:36:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:35.918 23:36:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:37.815 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:38.073 23:36:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:38.330 23:36:12 
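On the host side the namespace is attached with nvme-cli, and the test waits until a block device with the SPDK serial shows up before sizing and partitioning it, exactly as in the first pass. A condensed sketch using the NQNs, address and host UUID printed in the trace:

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
# waitforserial: poll until exactly one namespace reports the SPDK serial.
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 1 ]; do
  sleep 2
done

The device is then sized through sec_size_to_bytes (which, per the trace, checks /sys/block/nvme0n1 and reports 536870912 bytes, equal to malloc_size) and partitioned with the same parted invocation as before.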
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:39.262 23:36:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:40.196 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:40.196 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:40.196 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:40.196 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.196 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.196 ************************************ 00:11:40.196 START TEST filesystem_in_capsule_ext4 00:11:40.196 ************************************ 00:11:40.454 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:40.454 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:40.454 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:40.454 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:40.454 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:40.454 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:40.454 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:40.454 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:40.454 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:40.454 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:40.454 23:36:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:40.454 mke2fs 1.47.0 (5-Feb-2023) 00:11:40.454 Discarding device blocks: 0/522240 done 00:11:40.454 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:40.454 Filesystem UUID: 5be53c83-122b-478f-8517-ecabcd5f1af8 00:11:40.454 Superblock backups stored on blocks: 00:11:40.454 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:40.454 00:11:40.454 Allocating group tables: 0/64 done 00:11:40.454 Writing inode tables: 
0/64 done 00:11:40.454 Creating journal (8192 blocks): done 00:11:42.759 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:11:42.759 00:11:42.759 23:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:42.759 23:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.018 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.275 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:48.275 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.275 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:48.275 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:48.275 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 105643 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.276 00:11:48.276 real 0m7.888s 00:11:48.276 user 0m0.021s 00:11:48.276 sys 0m0.060s 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:48.276 ************************************ 00:11:48.276 END TEST filesystem_in_capsule_ext4 00:11:48.276 ************************************ 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.276 
************************************ 00:11:48.276 START TEST filesystem_in_capsule_btrfs 00:11:48.276 ************************************ 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:48.276 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:48.539 btrfs-progs v6.8.1 00:11:48.539 See https://btrfs.readthedocs.io for more information. 00:11:48.539 00:11:48.539 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:48.539 NOTE: several default settings have changed in version 5.15, please make sure 00:11:48.539 this does not affect your deployments: 00:11:48.539 - DUP for metadata (-m dup) 00:11:48.539 - enabled no-holes (-O no-holes) 00:11:48.539 - enabled free-space-tree (-R free-space-tree) 00:11:48.539 00:11:48.539 Label: (null) 00:11:48.539 UUID: 32908e9b-e05d-447c-8774-1ddb871d644f 00:11:48.539 Node size: 16384 00:11:48.539 Sector size: 4096 (CPU page size: 4096) 00:11:48.539 Filesystem size: 510.00MiB 00:11:48.539 Block group profiles: 00:11:48.539 Data: single 8.00MiB 00:11:48.539 Metadata: DUP 32.00MiB 00:11:48.539 System: DUP 8.00MiB 00:11:48.539 SSD detected: yes 00:11:48.539 Zoned device: no 00:11:48.539 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:48.539 Checksum: crc32c 00:11:48.539 Number of devices: 1 00:11:48.539 Devices: 00:11:48.539 ID SIZE PATH 00:11:48.539 1 510.00MiB /dev/nvme0n1p1 00:11:48.539 00:11:48.539 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:48.539 23:36:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 105643 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.472 00:11:49.472 real 0m1.318s 00:11:49.472 user 0m0.015s 00:11:49.472 sys 0m0.108s 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.472 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:49.472 ************************************ 00:11:49.472 END TEST filesystem_in_capsule_btrfs 00:11:49.472 ************************************ 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.730 ************************************ 00:11:49.730 START TEST filesystem_in_capsule_xfs 00:11:49.730 ************************************ 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:49.730 23:36:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:49.730 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:49.730 = sectsz=512 attr=2, projid32bit=1 00:11:49.730 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:49.730 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:49.730 data = bsize=4096 blocks=130560, imaxpct=25 00:11:49.730 = sunit=0 swidth=0 blks 00:11:49.730 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:49.730 log =internal log bsize=4096 blocks=16384, version=2 00:11:49.730 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:49.730 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:50.662 Discarding blocks...Done. 
00:11:50.662 23:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:50.662 23:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 105643 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.202 00:11:53.202 real 0m3.504s 00:11:53.202 user 0m0.020s 00:11:53.202 sys 0m0.061s 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:53.202 ************************************ 00:11:53.202 END TEST filesystem_in_capsule_xfs 00:11:53.202 ************************************ 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 105643 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 105643 ']' 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 105643 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:53.202 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.203 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105643 00:11:53.500 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.500 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.500 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105643' 00:11:53.500 killing process with pid 105643 00:11:53.500 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 105643 00:11:53.500 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 105643 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:53.759 00:11:53.759 real 0m19.038s 00:11:53.759 user 1m13.901s 00:11:53.759 sys 0m2.294s 00:11:53.759 23:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.759 ************************************ 00:11:53.759 END TEST nvmf_filesystem_in_capsule 00:11:53.759 ************************************ 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.759 rmmod nvme_tcp 00:11:53.759 rmmod nvme_fabrics 00:11:53.759 rmmod nvme_keyring 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:53.759 23:36:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:53.759 23:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:53.759 23:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:53.759 23:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:53.759 23:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.760 23:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:53.760 23:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.760 23:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.760 23:36:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.294 00:11:56.294 real 0m39.309s 00:11:56.294 user 2m14.321s 00:11:56.294 sys 0m6.147s 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:56.294 
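The filesystem_in_capsule_xfs pass that finishes above reduces to a short mount/IO/unmount loop against the NVMe-oF-attached partition, followed by a symmetric teardown. A condensed sketch of those steps, assembled from the commands visible in the trace (the device name, the cnode1 NQN and pid 105643 are specific to this run, the pid written here as $nvmfpid; rpc_cmd and killprocess are the harness helpers seen in the trace, and the retry and error handling of target/filesystem.sh is omitted):

  mount /dev/nvme0n1p1 /mnt/device                 # filesystem.sh@23: mount the XFS partition exported over NVMe/TCP
  touch /mnt/device/aaa && sync                    # @24-@25: create a file and flush it to the target
  rm /mnt/device/aaa && sync                       # @26-@27: delete it and flush again
  umount /mnt/device                               # @30: unmount before tearing the fabric down
  kill -0 "$nvmfpid"                               # @37: the nvmf_tgt process must still be alive
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # @91: drop the test partition
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # @94: detach the initiator
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # @97: remove the subsystem on the target side
  killprocess "$nvmfpid"                           # @101: stop nvmf_tgt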
************************************ 00:11:56.294 END TEST nvmf_filesystem 00:11:56.294 ************************************ 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.294 ************************************ 00:11:56.294 START TEST nvmf_target_discovery 00:11:56.294 ************************************ 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:56.294 * Looking for test storage... 00:11:56.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:56.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.294 --rc genhtml_branch_coverage=1 00:11:56.294 --rc genhtml_function_coverage=1 00:11:56.294 --rc genhtml_legend=1 00:11:56.294 --rc geninfo_all_blocks=1 00:11:56.294 --rc geninfo_unexecuted_blocks=1 00:11:56.294 00:11:56.294 ' 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:56.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.294 --rc genhtml_branch_coverage=1 00:11:56.294 --rc genhtml_function_coverage=1 00:11:56.294 --rc genhtml_legend=1 00:11:56.294 --rc geninfo_all_blocks=1 00:11:56.294 --rc geninfo_unexecuted_blocks=1 00:11:56.294 00:11:56.294 ' 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:56.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.294 --rc genhtml_branch_coverage=1 00:11:56.294 --rc genhtml_function_coverage=1 00:11:56.294 --rc genhtml_legend=1 00:11:56.294 --rc geninfo_all_blocks=1 00:11:56.294 --rc geninfo_unexecuted_blocks=1 00:11:56.294 00:11:56.294 ' 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:56.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.294 --rc genhtml_branch_coverage=1 00:11:56.294 --rc genhtml_function_coverage=1 00:11:56.294 --rc genhtml_legend=1 00:11:56.294 --rc geninfo_all_blocks=1 00:11:56.294 --rc geninfo_unexecuted_blocks=1 00:11:56.294 00:11:56.294 ' 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.294 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.295 23:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:58.198 23:36:32 
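The "[: : integer expression expected" message printed by nvmf/common.sh line 33 earlier in this stretch is build_nvmf_app_args testing a variable that is empty in this environment ('[' '' -eq 1 ']'); the comparison simply evaluates false, the corresponding nvmf_tgt option is not appended, and the run continues. A generic defensive form of that kind of check, shown only as a bash pattern (SOME_FLAG is a placeholder, not a variable from common.sh):

  # default empty/unset flags to 0 before a numeric comparison
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      NVMF_APP+=(--some-option)   # hypothetical option, for illustration only
  fi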
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:58.198 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:58.198 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:58.198 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:58.199 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:58.199 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.199 23:36:32 
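The nvmf_tcp_init sequence running here builds the usual two-port topology for the phy autotest: the target-side port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, so initiator and target talk over distinct interfaces rather than the local loopback stack. The plumbing, collected from the commands in the trace (interface names are specific to this machine; the link-up, firewall and ping steps appear immediately below):

  ip netns add cvl_0_0_ns_spdk                                         # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port on the initiator side
  ping -c 1 10.0.0.2                                                   # reachability check before any NVMe traffic

Keeping the target port in its own namespace is also what lets every later target-side command be wrapped in "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD), so the two ends cannot short-circuit through the same network stack.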
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:58.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:11:58.199 00:11:58.199 --- 10.0.0.2 ping statistics --- 00:11:58.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.199 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:58.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:11:58.199 00:11:58.199 --- 10.0.0.1 ping statistics --- 00:11:58.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.199 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=110034 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 110034 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 110034 ']' 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.199 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.199 [2024-11-19 23:36:32.503223] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:11:58.199 [2024-11-19 23:36:32.503318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.475 [2024-11-19 23:36:32.579772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.475 [2024-11-19 23:36:32.629188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.475 [2024-11-19 23:36:32.629248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.475 [2024-11-19 23:36:32.629276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.475 [2024-11-19 23:36:32.629288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.475 [2024-11-19 23:36:32.629297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
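With networking in place, nvmfappstart launches the target application inside the namespace and waitforlisten blocks until its JSON-RPC socket answers; only after that do the RPC-driven configuration steps begin. A sketch of the same launch, with waitforlisten replaced by a minimal polling stand-in (paths relative to the spdk checkout, socket path as reported in the trace; the real helper also checks that the pid stays alive while polling):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, all trace groups, cores 0-3
  nvmfpid=$!
  # poll the RPC socket until the application responds
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done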
00:11:58.475 [2024-11-19 23:36:32.630852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.475 [2024-11-19 23:36:32.630913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.475 [2024-11-19 23:36:32.630978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.475 [2024-11-19 23:36:32.630981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.475 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.475 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:58.475 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:58.475 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:58.475 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.475 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.475 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.475 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.475 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.475 [2024-11-19 23:36:32.780990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.733 Null1 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.733 23:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.733 [2024-11-19 23:36:32.825404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.733 Null2 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:58.733 Null3 00:11:58.733 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 Null4 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 23:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.734 23:36:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:58.992 00:11:58.992 Discovery Log Number of Records 6, Generation counter 6 00:11:58.992 =====Discovery Log Entry 0====== 00:11:58.992 trtype: tcp 00:11:58.992 adrfam: ipv4 00:11:58.992 subtype: current discovery subsystem 00:11:58.992 treq: not required 00:11:58.992 portid: 0 00:11:58.992 trsvcid: 4420 00:11:58.992 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:58.992 traddr: 10.0.0.2 00:11:58.992 eflags: explicit discovery connections, duplicate discovery information 00:11:58.992 sectype: none 00:11:58.992 =====Discovery Log Entry 1====== 00:11:58.992 trtype: tcp 00:11:58.992 adrfam: ipv4 00:11:58.992 subtype: nvme subsystem 00:11:58.992 treq: not required 00:11:58.992 portid: 0 00:11:58.992 trsvcid: 4420 00:11:58.992 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:58.992 traddr: 10.0.0.2 00:11:58.992 eflags: none 00:11:58.992 sectype: none 00:11:58.992 =====Discovery Log Entry 2====== 00:11:58.992 trtype: tcp 00:11:58.992 adrfam: ipv4 00:11:58.992 subtype: nvme subsystem 00:11:58.992 treq: not required 00:11:58.992 portid: 0 00:11:58.992 trsvcid: 4420 00:11:58.992 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:58.993 traddr: 10.0.0.2 00:11:58.993 eflags: none 00:11:58.993 sectype: none 00:11:58.993 =====Discovery Log Entry 3====== 00:11:58.993 trtype: tcp 00:11:58.993 adrfam: ipv4 00:11:58.993 subtype: nvme subsystem 00:11:58.993 treq: not required 00:11:58.993 portid: 0 00:11:58.993 trsvcid: 4420 00:11:58.993 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:58.993 traddr: 10.0.0.2 00:11:58.993 eflags: none 00:11:58.993 sectype: none 00:11:58.993 =====Discovery Log Entry 4====== 00:11:58.993 trtype: tcp 00:11:58.993 adrfam: ipv4 00:11:58.993 subtype: nvme subsystem 
00:11:58.993 treq: not required 00:11:58.993 portid: 0 00:11:58.993 trsvcid: 4420 00:11:58.993 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:58.993 traddr: 10.0.0.2 00:11:58.993 eflags: none 00:11:58.993 sectype: none 00:11:58.993 =====Discovery Log Entry 5====== 00:11:58.993 trtype: tcp 00:11:58.993 adrfam: ipv4 00:11:58.993 subtype: discovery subsystem referral 00:11:58.993 treq: not required 00:11:58.993 portid: 0 00:11:58.993 trsvcid: 4430 00:11:58.993 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:58.993 traddr: 10.0.0.2 00:11:58.993 eflags: none 00:11:58.993 sectype: none 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:58.993 Perform nvmf subsystem discovery via RPC 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.993 [ 00:11:58.993 { 00:11:58.993 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:58.993 "subtype": "Discovery", 00:11:58.993 "listen_addresses": [ 00:11:58.993 { 00:11:58.993 "trtype": "TCP", 00:11:58.993 "adrfam": "IPv4", 00:11:58.993 "traddr": "10.0.0.2", 00:11:58.993 "trsvcid": "4420" 00:11:58.993 } 00:11:58.993 ], 00:11:58.993 "allow_any_host": true, 00:11:58.993 "hosts": [] 00:11:58.993 }, 00:11:58.993 { 00:11:58.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.993 "subtype": "NVMe", 00:11:58.993 "listen_addresses": [ 00:11:58.993 { 00:11:58.993 "trtype": "TCP", 00:11:58.993 "adrfam": "IPv4", 00:11:58.993 "traddr": "10.0.0.2", 00:11:58.993 "trsvcid": "4420" 00:11:58.993 } 00:11:58.993 ], 00:11:58.993 "allow_any_host": true, 00:11:58.993 "hosts": [], 00:11:58.993 "serial_number": "SPDK00000000000001", 00:11:58.993 "model_number": "SPDK bdev Controller", 00:11:58.993 "max_namespaces": 32, 00:11:58.993 "min_cntlid": 1, 00:11:58.993 "max_cntlid": 65519, 00:11:58.993 "namespaces": [ 00:11:58.993 { 00:11:58.993 "nsid": 1, 00:11:58.993 "bdev_name": "Null1", 00:11:58.993 "name": "Null1", 00:11:58.993 "nguid": "E6B52DCEBE6D4C218036EE03BBF1AE69", 00:11:58.993 "uuid": "e6b52dce-be6d-4c21-8036-ee03bbf1ae69" 00:11:58.993 } 00:11:58.993 ] 00:11:58.993 }, 00:11:58.993 { 00:11:58.993 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:58.993 "subtype": "NVMe", 00:11:58.993 "listen_addresses": [ 00:11:58.993 { 00:11:58.993 "trtype": "TCP", 00:11:58.993 "adrfam": "IPv4", 00:11:58.993 "traddr": "10.0.0.2", 00:11:58.993 "trsvcid": "4420" 00:11:58.993 } 00:11:58.993 ], 00:11:58.993 "allow_any_host": true, 00:11:58.993 "hosts": [], 00:11:58.993 "serial_number": "SPDK00000000000002", 00:11:58.993 "model_number": "SPDK bdev Controller", 00:11:58.993 "max_namespaces": 32, 00:11:58.993 "min_cntlid": 1, 00:11:58.993 "max_cntlid": 65519, 00:11:58.993 "namespaces": [ 00:11:58.993 { 00:11:58.993 "nsid": 1, 00:11:58.993 "bdev_name": "Null2", 00:11:58.993 "name": "Null2", 00:11:58.993 "nguid": "CEE9DCA37652458B9B44A7864F145EE7", 00:11:58.993 "uuid": "cee9dca3-7652-458b-9b44-a7864f145ee7" 00:11:58.993 } 00:11:58.993 ] 00:11:58.993 }, 00:11:58.993 { 00:11:58.993 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:58.993 "subtype": "NVMe", 00:11:58.993 "listen_addresses": [ 00:11:58.993 { 00:11:58.993 "trtype": "TCP", 00:11:58.993 "adrfam": "IPv4", 00:11:58.993 "traddr": "10.0.0.2", 
00:11:58.993 "trsvcid": "4420" 00:11:58.993 } 00:11:58.993 ], 00:11:58.993 "allow_any_host": true, 00:11:58.993 "hosts": [], 00:11:58.993 "serial_number": "SPDK00000000000003", 00:11:58.993 "model_number": "SPDK bdev Controller", 00:11:58.993 "max_namespaces": 32, 00:11:58.993 "min_cntlid": 1, 00:11:58.993 "max_cntlid": 65519, 00:11:58.993 "namespaces": [ 00:11:58.993 { 00:11:58.993 "nsid": 1, 00:11:58.993 "bdev_name": "Null3", 00:11:58.993 "name": "Null3", 00:11:58.993 "nguid": "51E63012F91F45A2BD7DD3DE80B21E7A", 00:11:58.993 "uuid": "51e63012-f91f-45a2-bd7d-d3de80b21e7a" 00:11:58.993 } 00:11:58.993 ] 00:11:58.993 }, 00:11:58.993 { 00:11:58.993 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:58.993 "subtype": "NVMe", 00:11:58.993 "listen_addresses": [ 00:11:58.993 { 00:11:58.993 "trtype": "TCP", 00:11:58.993 "adrfam": "IPv4", 00:11:58.993 "traddr": "10.0.0.2", 00:11:58.993 "trsvcid": "4420" 00:11:58.993 } 00:11:58.993 ], 00:11:58.993 "allow_any_host": true, 00:11:58.993 "hosts": [], 00:11:58.993 "serial_number": "SPDK00000000000004", 00:11:58.993 "model_number": "SPDK bdev Controller", 00:11:58.993 "max_namespaces": 32, 00:11:58.993 "min_cntlid": 1, 00:11:58.993 "max_cntlid": 65519, 00:11:58.993 "namespaces": [ 00:11:58.993 { 00:11:58.993 "nsid": 1, 00:11:58.993 "bdev_name": "Null4", 00:11:58.993 "name": "Null4", 00:11:58.993 "nguid": "5F4C3DED49BC4614A89EC4FCDE86146C", 00:11:58.993 "uuid": "5f4c3ded-49bc-4614-a89e-c4fcde86146c" 00:11:58.993 } 00:11:58.993 ] 00:11:58.993 } 00:11:58.993 ] 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.993 23:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.993 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:58.994 23:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.994 rmmod nvme_tcp 00:11:58.994 rmmod nvme_fabrics 00:11:58.994 rmmod nvme_keyring 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 110034 ']' 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 110034 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 110034 ']' 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 110034 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.994 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110034 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110034' 00:11:59.252 killing process with pid 110034 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 110034 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 110034 00:11:59.252 23:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.252 23:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.788 00:12:01.788 real 0m5.481s 00:12:01.788 user 0m4.497s 00:12:01.788 sys 0m1.882s 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.788 ************************************ 00:12:01.788 END TEST nvmf_target_discovery 00:12:01.788 ************************************ 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.788 ************************************ 00:12:01.788 START TEST nvmf_referrals 00:12:01.788 ************************************ 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:01.788 * Looking for test storage... 
00:12:01.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:01.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.788 --rc genhtml_branch_coverage=1 00:12:01.788 --rc genhtml_function_coverage=1 00:12:01.788 --rc genhtml_legend=1 00:12:01.788 --rc geninfo_all_blocks=1 00:12:01.788 --rc geninfo_unexecuted_blocks=1 00:12:01.788 00:12:01.788 ' 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:01.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.788 --rc genhtml_branch_coverage=1 00:12:01.788 --rc genhtml_function_coverage=1 00:12:01.788 --rc genhtml_legend=1 00:12:01.788 --rc geninfo_all_blocks=1 00:12:01.788 --rc geninfo_unexecuted_blocks=1 00:12:01.788 00:12:01.788 ' 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:01.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.788 --rc genhtml_branch_coverage=1 00:12:01.788 --rc genhtml_function_coverage=1 00:12:01.788 --rc genhtml_legend=1 00:12:01.788 --rc geninfo_all_blocks=1 00:12:01.788 --rc geninfo_unexecuted_blocks=1 00:12:01.788 00:12:01.788 ' 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:01.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.788 --rc genhtml_branch_coverage=1 00:12:01.788 --rc genhtml_function_coverage=1 00:12:01.788 --rc genhtml_legend=1 00:12:01.788 --rc geninfo_all_blocks=1 00:12:01.788 --rc geninfo_unexecuted_blocks=1 00:12:01.788 00:12:01.788 ' 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:01.788 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.789 23:36:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:03.693 23:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:03.693 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:03.693 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:03.693 
23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:03.693 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:03.693 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:03.693 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:03.694 23:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:03.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:03.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:12:03.694 00:12:03.694 --- 10.0.0.2 ping statistics --- 00:12:03.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.694 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:03.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:12:03.694 00:12:03.694 --- 10.0.0.1 ping statistics --- 00:12:03.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.694 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:03.694 23:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=112067 00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 112067 00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 112067 ']' 00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.952 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.952 [2024-11-19 23:36:38.074619] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:12:03.952 [2024-11-19 23:36:38.074692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.952 [2024-11-19 23:36:38.151207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.952 [2024-11-19 23:36:38.201013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.952 [2024-11-19 23:36:38.201099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.952 [2024-11-19 23:36:38.201128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.952 [2024-11-19 23:36:38.201141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.952 [2024-11-19 23:36:38.201151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.952 [2024-11-19 23:36:38.202755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.952 [2024-11-19 23:36:38.202820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.952 [2024-11-19 23:36:38.202885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.952 [2024-11-19 23:36:38.202888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.211 [2024-11-19 23:36:38.353243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:04.211 [2024-11-19 23:36:38.365520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:04.211 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:04.469 23:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:04.469 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.727 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:04.728 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.728 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:04.728 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:04.728 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:04.728 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:04.728 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:04.728 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.728 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:04.728 23:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:04.997 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:04.997 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:04.997 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:04.997 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:04.997 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:04.997 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.997 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:05.256 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:05.256 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:05.256 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:05.256 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:05.256 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.256 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.514 23:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.514 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:05.772 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:05.772 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:05.772 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:05.772 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:05.772 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.772 23:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:06.029 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
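The block above walks the referral RPCs end to end: two referrals to 127.0.0.2:4430 are added (one for the discovery subsystem, one for nqn.2016-06.io.spdk:cnode1), read back both through nvmf_discovery_get_referrals on the target side and through nvme discover against the discovery service on port 8009, and then removed again until get_referrals reports an empty list. A condensed sketch of that flow, assuming a running nvmf_tgt with a TCP listener and using SPDK's scripts/rpc.py directly (the trace goes through the autotest rpc_cmd wrapper, and its nvme discover calls also pass --hostnqn/--hostid, omitted here for brevity):

# add one referral per subsystem type
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

# target-side view of the referrals
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

# host-side view via the discovery service, dropping the entry that describes
# the discovery subsystem we are currently talking to
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

# tear the referrals down again; an empty list is the expected end state
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expected: 0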
00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:06.287 rmmod nvme_tcp 00:12:06.287 rmmod nvme_fabrics 00:12:06.287 rmmod nvme_keyring 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 112067 ']' 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 112067 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 112067 ']' 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 112067 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112067 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112067' 00:12:06.287 killing process with pid 112067 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 112067 00:12:06.287 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 112067 00:12:06.545 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:06.545 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:06.545 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:06.545 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:06.545 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:06.545 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:06.545 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:06.545 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:06.545 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:06.545 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.545 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.545 23:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:09.087 00:12:09.087 real 0m7.151s 00:12:09.087 user 0m11.606s 00:12:09.087 sys 0m2.328s 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.087 ************************************ 00:12:09.087 END TEST nvmf_referrals 00:12:09.087 ************************************ 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.087 ************************************ 00:12:09.087 START TEST nvmf_connect_disconnect 00:12:09.087 ************************************ 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:09.087 * Looking for test storage... 00:12:09.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:09.087 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:09.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.088 --rc genhtml_branch_coverage=1 00:12:09.088 --rc genhtml_function_coverage=1 00:12:09.088 --rc genhtml_legend=1 00:12:09.088 --rc geninfo_all_blocks=1 00:12:09.088 --rc geninfo_unexecuted_blocks=1 00:12:09.088 00:12:09.088 ' 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:09.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.088 --rc genhtml_branch_coverage=1 00:12:09.088 --rc genhtml_function_coverage=1 00:12:09.088 --rc genhtml_legend=1 00:12:09.088 --rc geninfo_all_blocks=1 00:12:09.088 --rc geninfo_unexecuted_blocks=1 00:12:09.088 00:12:09.088 ' 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:09.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.088 --rc genhtml_branch_coverage=1 00:12:09.088 --rc genhtml_function_coverage=1 00:12:09.088 --rc genhtml_legend=1 00:12:09.088 --rc geninfo_all_blocks=1 00:12:09.088 --rc geninfo_unexecuted_blocks=1 00:12:09.088 00:12:09.088 ' 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:09.088 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.088 --rc genhtml_branch_coverage=1 00:12:09.088 --rc genhtml_function_coverage=1 00:12:09.088 --rc genhtml_legend=1 00:12:09.088 --rc geninfo_all_blocks=1 00:12:09.088 --rc geninfo_unexecuted_blocks=1 00:12:09.088 00:12:09.088 ' 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.088 23:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.088 23:36:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.088 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:09.089 23:36:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:10.992 
23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:10.992 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:10.993 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.993 
23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:10.993 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:10.993 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:10.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.993 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:12:11.253 00:12:11.253 --- 10.0.0.2 ping statistics --- 00:12:11.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.253 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:12:11.253 00:12:11.253 --- 10.0.0.1 ping statistics --- 00:12:11.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.253 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=114478 00:12:11.253 23:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 114478 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 114478 ']' 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.253 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.253 [2024-11-19 23:36:45.503431] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:12:11.253 [2024-11-19 23:36:45.503517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.512 [2024-11-19 23:36:45.583615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.512 [2024-11-19 23:36:45.634327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.512 [2024-11-19 23:36:45.634394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.512 [2024-11-19 23:36:45.634410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.512 [2024-11-19 23:36:45.634423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.512 [2024-11-19 23:36:45.634434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
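For the phy TCP runs the two E810 ports are split across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 for the target, while cvl_0_1 stays in the root namespace as 10.0.0.1 for the initiator, and nvmf_tgt is then started inside that namespace (its RPC socket is a UNIX domain socket, so it remains reachable from the root namespace). A minimal sketch of that setup using the names and addresses from this run; the SPDK binary path is shortened and the iptables comment tagging used by the helper is dropped:

# split the two ports across namespaces
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# run the target inside the namespace with the same flags as above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &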
00:12:11.512 [2024-11-19 23:36:45.636124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.512 [2024-11-19 23:36:45.636153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.512 [2024-11-19 23:36:45.636209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.512 [2024-11-19 23:36:45.636213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.512 [2024-11-19 23:36:45.789869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.512 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.771 23:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.771 [2024-11-19 23:36:45.856528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:11.771 23:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:14.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.992 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.698 [2024-11-19 23:38:53.567834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdff0 is same with the state(6) to be set 00:14:19.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.450 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:15:12.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.619 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:03.619 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:03.619 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:03.619 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:03.619 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:03.619 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:03.619 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:03.619 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:03.619 rmmod nvme_tcp 00:16:03.619 rmmod nvme_fabrics 00:16:03.619 rmmod nvme_keyring 00:16:03.619 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 114478 ']' 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 114478 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 114478 ']' 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@958 -- # kill -0 114478 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114478 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114478' 00:16:03.620 killing process with pid 114478 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 114478 00:16:03.620 23:40:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 114478 00:16:03.878 23:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:03.878 23:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:03.878 23:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:03.879 23:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:03.879 23:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:03.879 23:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:03.879 23:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:03.879 23:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:03.879 23:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:03.879 23:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.879 23:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.879 23:40:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.783 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:05.783 00:16:05.783 real 3m57.249s 00:16:05.783 user 15m3.777s 00:16:05.783 sys 0m34.793s 00:16:05.783 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.783 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:05.783 ************************************ 00:16:05.783 END TEST nvmf_connect_disconnect 00:16:05.783 ************************************ 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.043 ************************************ 00:16:06.043 START TEST nvmf_multitarget 00:16:06.043 ************************************ 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:06.043 * Looking for test storage... 00:16:06.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:06.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.043 --rc genhtml_branch_coverage=1 00:16:06.043 --rc genhtml_function_coverage=1 00:16:06.043 --rc genhtml_legend=1 00:16:06.043 --rc geninfo_all_blocks=1 00:16:06.043 --rc geninfo_unexecuted_blocks=1 00:16:06.043 00:16:06.043 ' 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:06.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.043 --rc genhtml_branch_coverage=1 00:16:06.043 --rc genhtml_function_coverage=1 00:16:06.043 --rc genhtml_legend=1 00:16:06.043 --rc geninfo_all_blocks=1 00:16:06.043 --rc geninfo_unexecuted_blocks=1 00:16:06.043 00:16:06.043 ' 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:06.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.043 --rc genhtml_branch_coverage=1 00:16:06.043 --rc genhtml_function_coverage=1 00:16:06.043 --rc genhtml_legend=1 00:16:06.043 --rc geninfo_all_blocks=1 00:16:06.043 --rc geninfo_unexecuted_blocks=1 00:16:06.043 00:16:06.043 ' 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:06.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.043 --rc genhtml_branch_coverage=1 00:16:06.043 --rc genhtml_function_coverage=1 00:16:06.043 --rc genhtml_legend=1 00:16:06.043 --rc geninfo_all_blocks=1 00:16:06.043 --rc geninfo_unexecuted_blocks=1 00:16:06.043 00:16:06.043 ' 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.043 23:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.043 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:06.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:06.044 23:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:06.044 23:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:08.574 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:08.574 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:08.574 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:08.575 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:08.575 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:08.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:16:08.575 00:16:08.575 --- 10.0.0.2 ping statistics --- 00:16:08.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.575 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:08.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:08.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:16:08.575 00:16:08.575 --- 10.0.0.1 ping statistics --- 00:16:08.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.575 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=145619 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 145619 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 145619 ']' 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:08.575 [2024-11-19 23:40:42.573622] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
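The namespace plumbing and target launch traced above reduce to roughly the sequence below. It is condensed from the commands actually traced: cvl_0_0/cvl_0_1 are the two ice ports discovered earlier, the iptables comment is shortened, and backgrounding nvmf_tgt with & is an illustrative stand-in for what nvmfappstart/waitforlisten do.

  ip netns add cvl_0_0_ns_spdk                                         # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                   # default ns -> target ns sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> default ns sanity check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF = 4 reactors, -e 0xFFFF = all tracepoint groups (see notices above)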
00:16:08.575 [2024-11-19 23:40:42.573722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.575 [2024-11-19 23:40:42.653361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.575 [2024-11-19 23:40:42.704353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.575 [2024-11-19 23:40:42.704419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.575 [2024-11-19 23:40:42.704446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.575 [2024-11-19 23:40:42.704459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.575 [2024-11-19 23:40:42.704472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.575 [2024-11-19 23:40:42.706191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.575 [2024-11-19 23:40:42.706251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.575 [2024-11-19 23:40:42.706308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.575 [2024-11-19 23:40:42.706312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:08.575 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:08.832 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:08.832 23:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:08.832 "nvmf_tgt_1" 00:16:08.832 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:09.089 "nvmf_tgt_2" 00:16:09.089 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:16:09.089 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:09.089 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:09.089 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:09.347 true 00:16:09.347 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:09.347 true 00:16:09.347 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:09.347 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:09.604 rmmod nvme_tcp 00:16:09.604 rmmod nvme_fabrics 00:16:09.604 rmmod nvme_keyring 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 145619 ']' 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 145619 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 145619 ']' 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 145619 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145619 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.604 23:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 145619' 00:16:09.604 killing process with pid 145619 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 145619 00:16:09.604 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 145619 00:16:09.862 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:09.862 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:09.862 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:09.862 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:09.862 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:09.862 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:09.862 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:09.862 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:09.862 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:09.862 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.863 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.863 23:40:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.769 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:11.769 00:16:11.769 real 0m5.873s 00:16:11.769 user 0m6.675s 00:16:11.769 sys 0m1.997s 00:16:11.769 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.769 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:11.769 ************************************ 00:16:11.769 END TEST nvmf_multitarget 00:16:11.769 ************************************ 00:16:11.769 23:40:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:11.769 23:40:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:11.769 23:40:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.769 23:40:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:11.769 ************************************ 00:16:11.769 START TEST nvmf_rpc 00:16:11.769 ************************************ 00:16:11.769 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:12.030 * Looking for test storage... 
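Stripped of the xtrace noise, the nvmf_multitarget pass that just finished is a short RPC round-trip against the running target. A condensed recap, using $rpc as shorthand for the multitarget_rpc.py wrapper invoked in the trace and the same flags as traced:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc nvmf_get_targets | jq length              # 1: only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32    # add two extra targets with the flags shown above
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc nvmf_get_targets | jq length              # now 3
  $rpc nvmf_delete_target -n nvmf_tgt_1          # remove them again
  $rpc nvmf_delete_target -n nvmf_tgt_2
  $rpc nvmf_get_targets | jq length              # back to 1 before nvmftestfini tears the target down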
00:16:12.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.030 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:12.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.031 --rc genhtml_branch_coverage=1 00:16:12.031 --rc genhtml_function_coverage=1 00:16:12.031 --rc genhtml_legend=1 00:16:12.031 --rc geninfo_all_blocks=1 00:16:12.031 --rc geninfo_unexecuted_blocks=1 00:16:12.031 00:16:12.031 ' 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:12.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.031 --rc genhtml_branch_coverage=1 00:16:12.031 --rc genhtml_function_coverage=1 00:16:12.031 --rc genhtml_legend=1 00:16:12.031 --rc geninfo_all_blocks=1 00:16:12.031 --rc geninfo_unexecuted_blocks=1 00:16:12.031 00:16:12.031 ' 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:12.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.031 --rc genhtml_branch_coverage=1 00:16:12.031 --rc genhtml_function_coverage=1 00:16:12.031 --rc genhtml_legend=1 00:16:12.031 --rc geninfo_all_blocks=1 00:16:12.031 --rc geninfo_unexecuted_blocks=1 00:16:12.031 00:16:12.031 ' 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:12.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.031 --rc genhtml_branch_coverage=1 00:16:12.031 --rc genhtml_function_coverage=1 00:16:12.031 --rc genhtml_legend=1 00:16:12.031 --rc geninfo_all_blocks=1 00:16:12.031 --rc geninfo_unexecuted_blocks=1 00:16:12.031 00:16:12.031 ' 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
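The cmp_versions block that reappears at the start of each test (here deciding whether the installed lcov predates 2.x, which controls the branch/function-coverage flags) is just a component-wise numeric compare. A simplified standalone rendering of the traced logic, with missing components padded to 0, looks like:

  lt() {                                   # succeeds when version $1 sorts before version $2
      local -a ver1 ver2
      local v ver1_l ver2_l
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                             # equal versions are not "less than"
  }

  lt 1.15 2 && echo 'lcov is older than 2.x: keep the lcov_branch_coverage/lcov_function_coverage flags'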
00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:12.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:12.031 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:12.031 23:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.032 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:12.032 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:12.032 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:12.032 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.032 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.032 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.032 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:12.032 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:12.032 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:12.032 23:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:14.002 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:14.002 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:14.002 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:14.002 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:14.002 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:14.003 23:40:48 
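The discovery pass above maps each supported PCI function to its kernel net device by globbing the device's sysfs net/ directory, which is how cvl_0_0 and cvl_0_1 are found under 0000:0a:00.0 and 0000:0a:00.1. The same lookup as a standalone sketch (PCI addresses are the ones reported in this run):

    # List the net devices the kernel bound to each NIC PCI function.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep interface names only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done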
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:14.003 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:14.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:16:14.263 00:16:14.263 --- 10.0.0.2 ping statistics --- 00:16:14.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.263 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:16:14.263 00:16:14.263 --- 10.0.0.1 ping statistics --- 00:16:14.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.263 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=147732 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 147732 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 147732 ']' 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.263 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.263 [2024-11-19 23:40:48.505234] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
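nvmf_tcp_init above builds the test topology: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the host namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420 on the initiator-facing interface, and a ping in each direction confirms the path before the SPDK target is launched inside the namespace. Condensed into plain commands (interface names, addresses and the nvmf_tgt path are the ones from this run; run as root):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host

    # Launch the target inside the namespace; the harness then waits for its
    # RPC socket (/var/tmp/spdk.sock) before issuing any rpc_cmd calls.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &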
00:16:14.263 [2024-11-19 23:40:48.505316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.522 [2024-11-19 23:40:48.579539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.522 [2024-11-19 23:40:48.629476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.522 [2024-11-19 23:40:48.629539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.522 [2024-11-19 23:40:48.629553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.522 [2024-11-19 23:40:48.629566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.522 [2024-11-19 23:40:48.629576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.522 [2024-11-19 23:40:48.631222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.522 [2024-11-19 23:40:48.631278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.522 [2024-11-19 23:40:48.631347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.522 [2024-11-19 23:40:48.631350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.522 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.522 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:14.522 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:14.522 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:14.522 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.522 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.522 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:14.522 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.522 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.522 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.522 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:14.522 "tick_rate": 2700000000, 00:16:14.522 "poll_groups": [ 00:16:14.522 { 00:16:14.522 "name": "nvmf_tgt_poll_group_000", 00:16:14.522 "admin_qpairs": 0, 00:16:14.522 "io_qpairs": 0, 00:16:14.522 "current_admin_qpairs": 0, 00:16:14.522 "current_io_qpairs": 0, 00:16:14.522 "pending_bdev_io": 0, 00:16:14.522 "completed_nvme_io": 0, 00:16:14.523 "transports": [] 00:16:14.523 }, 00:16:14.523 { 00:16:14.523 "name": "nvmf_tgt_poll_group_001", 00:16:14.523 "admin_qpairs": 0, 00:16:14.523 "io_qpairs": 0, 00:16:14.523 "current_admin_qpairs": 0, 00:16:14.523 "current_io_qpairs": 0, 00:16:14.523 "pending_bdev_io": 0, 00:16:14.523 "completed_nvme_io": 0, 00:16:14.523 "transports": [] 00:16:14.523 }, 00:16:14.523 { 00:16:14.523 "name": "nvmf_tgt_poll_group_002", 00:16:14.523 "admin_qpairs": 0, 00:16:14.523 "io_qpairs": 0, 00:16:14.523 
"current_admin_qpairs": 0, 00:16:14.523 "current_io_qpairs": 0, 00:16:14.523 "pending_bdev_io": 0, 00:16:14.523 "completed_nvme_io": 0, 00:16:14.523 "transports": [] 00:16:14.523 }, 00:16:14.523 { 00:16:14.523 "name": "nvmf_tgt_poll_group_003", 00:16:14.523 "admin_qpairs": 0, 00:16:14.523 "io_qpairs": 0, 00:16:14.523 "current_admin_qpairs": 0, 00:16:14.523 "current_io_qpairs": 0, 00:16:14.523 "pending_bdev_io": 0, 00:16:14.523 "completed_nvme_io": 0, 00:16:14.523 "transports": [] 00:16:14.523 } 00:16:14.523 ] 00:16:14.523 }' 00:16:14.523 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:14.523 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:14.523 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:14.523 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:14.523 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:14.523 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.782 [2024-11-19 23:40:48.867807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:14.782 "tick_rate": 2700000000, 00:16:14.782 "poll_groups": [ 00:16:14.782 { 00:16:14.782 "name": "nvmf_tgt_poll_group_000", 00:16:14.782 "admin_qpairs": 0, 00:16:14.782 "io_qpairs": 0, 00:16:14.782 "current_admin_qpairs": 0, 00:16:14.782 "current_io_qpairs": 0, 00:16:14.782 "pending_bdev_io": 0, 00:16:14.782 "completed_nvme_io": 0, 00:16:14.782 "transports": [ 00:16:14.782 { 00:16:14.782 "trtype": "TCP" 00:16:14.782 } 00:16:14.782 ] 00:16:14.782 }, 00:16:14.782 { 00:16:14.782 "name": "nvmf_tgt_poll_group_001", 00:16:14.782 "admin_qpairs": 0, 00:16:14.782 "io_qpairs": 0, 00:16:14.782 "current_admin_qpairs": 0, 00:16:14.782 "current_io_qpairs": 0, 00:16:14.782 "pending_bdev_io": 0, 00:16:14.782 "completed_nvme_io": 0, 00:16:14.782 "transports": [ 00:16:14.782 { 00:16:14.782 "trtype": "TCP" 00:16:14.782 } 00:16:14.782 ] 00:16:14.782 }, 00:16:14.782 { 00:16:14.782 "name": "nvmf_tgt_poll_group_002", 00:16:14.782 "admin_qpairs": 0, 00:16:14.782 "io_qpairs": 0, 00:16:14.782 "current_admin_qpairs": 0, 00:16:14.782 "current_io_qpairs": 0, 00:16:14.782 "pending_bdev_io": 0, 00:16:14.782 "completed_nvme_io": 0, 00:16:14.782 "transports": [ 00:16:14.782 { 00:16:14.782 "trtype": "TCP" 
00:16:14.782 } 00:16:14.782 ] 00:16:14.782 }, 00:16:14.782 { 00:16:14.782 "name": "nvmf_tgt_poll_group_003", 00:16:14.782 "admin_qpairs": 0, 00:16:14.782 "io_qpairs": 0, 00:16:14.782 "current_admin_qpairs": 0, 00:16:14.782 "current_io_qpairs": 0, 00:16:14.782 "pending_bdev_io": 0, 00:16:14.782 "completed_nvme_io": 0, 00:16:14.782 "transports": [ 00:16:14.782 { 00:16:14.782 "trtype": "TCP" 00:16:14.782 } 00:16:14.782 ] 00:16:14.782 } 00:16:14.782 ] 00:16:14.782 }' 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.782 23:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.782 Malloc1 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
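The jcount / jsum checks above are small jq reductions over the nvmf_get_stats output: jcount counts how many values a filter yields (four poll groups for core mask 0xF), jsum adds them up (all qpair counters should still be zero before any host connects). A standalone version of the same idea; the rpc.py path is an assumption, the test itself goes through its rpc_cmd wrapper:

    stats=$(./scripts/rpc.py nvmf_get_stats)
    jcount() { jq "$1" <<<"$stats" | wc -l; }                          # number of matches
    jsum()   { jq "$1" <<<"$stats" | awk '{s+=$1} END {print s}'; }    # sum of matches

    jcount '.poll_groups[].name'        # 4 poll groups, one per core in -m 0xF
    jsum   '.poll_groups[].io_qpairs'   # 0 until an initiator connects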
common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.782 [2024-11-19 23:40:49.047329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:14.782 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:14.782 [2024-11-19 23:40:49.069993] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:15.043 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:15.043 could not add new controller: failed to write to nvme-fabrics device 00:16:15.043 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:15.043 23:40:49 
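At this point the target has a TCP transport, a 64 MiB / 512-byte-block malloc bdev exported through nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and allow_any_host disabled, which is why the nvme connect above is refused with "does not allow host" until the host NQN is whitelisted. The same provisioning written out as direct RPC calls, with the rpc.py path an assumption and the NQNs taken from this run:

    RPC=./scripts/rpc.py                 # assumed location of SPDK's rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc1
    $RPC nvmf_subsystem_allow_any_host -d "$NQN"      # only whitelisted hosts from here on
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    # Rejected while the host NQN is missing from the allow list; accepted after:
    $RPC nvmf_subsystem_add_host "$NQN" "$HOSTNQN"
    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"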
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:15.043 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:15.043 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:15.043 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.043 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.043 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.043 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.043 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:15.613 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:15.613 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:15.613 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.613 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:15.613 23:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:17.511 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:17.511 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:17.511 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:17.511 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:17.511 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.511 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:17.511 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:17.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.769 [2024-11-19 23:40:51.871302] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:17.769 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:17.769 could not add new controller: failed to write to nvme-fabrics device 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.769 
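The waitforserial / waitforserial_disconnect helpers wrapped around every connect and disconnect above poll lsblk until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME) appears or disappears; the nvmf_subsystem_allow_any_host -e call here is the coarse-grained alternative to per-host add_host entries and lets the next connect through without a whitelist. A compact sketch of the polling pattern (the 15-retry / 2-second cadence mirrors the harness; exact counter handling differs):

    waitforserial() {            # wait for a namespace with this serial to appear
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
            sleep 2
        done
        return 1
    }

    waitforserial_disconnect() { # wait for the last device with this serial to vanish
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }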
23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.769 23:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:18.335 23:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:18.335 23:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:18.335 23:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.335 23:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:18.335 23:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:20.234 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:20.234 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:20.234 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.234 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:20.234 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.234 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:20.234 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:20.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:20.492 
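From target/rpc.sh@81 onward (traced below) the script repeats the same create / connect / verify / tear-down cycle $loops (5) times, so subsystem, listener and namespace lifetimes are exercised repeatedly against the same Malloc1 bdev. The shape of one iteration, condensed; the rpc.py path is assumed as above, and waitforserial / waitforserial_disconnect are the helpers sketched earlier:

    RPC=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    for i in $(seq 1 5); do
        $RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
        $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
        $RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5          # fixed NSID 5
        $RPC nvmf_subsystem_allow_any_host "$NQN"               # re-enable open access

        nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n "$NQN"
        waitforserial_disconnect SPDKISFASTANDAWESOME

        $RPC nvmf_subsystem_remove_ns "$NQN" 5
        $RPC nvmf_delete_subsystem "$NQN"
    done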
23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.492 [2024-11-19 23:40:54.659093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.492 23:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:21.058 23:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:21.058 23:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:21.058 23:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.058 23:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:21.316 23:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:23.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.213 [2024-11-19 23:40:57.489437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.213 23:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:24.146 23:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:24.146 23:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:24.146 23:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.146 23:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:24.146 23:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:26.044 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:26.044 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:26.044 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:26.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.045 [2024-11-19 23:41:00.319985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.045 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:26.979 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:26.979 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:26.979 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.979 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:26.979 23:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:28.875 
23:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:28.875 23:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:28.875 23:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:28.875 23:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:28.875 23:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.875 23:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:28.875 23:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:28.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.875 [2024-11-19 23:41:03.076702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.875 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:29.805 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:29.805 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:29.806 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.806 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:29.806 23:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.704 [2024-11-19 23:41:05.945482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.704 23:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:32.637 23:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:32.637 23:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:32.637 23:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:32.637 23:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:32.637 23:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:34.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:34.535 
23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.535 [2024-11-19 23:41:08.822254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.535 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.536 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.536 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.536 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.536 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.536 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.536 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.536 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.536 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.536 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 [2024-11-19 23:41:08.870323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 
23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 [2024-11-19 23:41:08.918516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 [2024-11-19 23:41:08.966662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:34.795 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.795 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.795 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.795 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.795 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.795 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.796 [2024-11-19 23:41:09.014822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:34.796 "tick_rate": 2700000000, 00:16:34.796 "poll_groups": [ 00:16:34.796 { 00:16:34.796 "name": "nvmf_tgt_poll_group_000", 00:16:34.796 "admin_qpairs": 2, 00:16:34.796 "io_qpairs": 84, 00:16:34.796 "current_admin_qpairs": 0, 00:16:34.796 "current_io_qpairs": 0, 00:16:34.796 "pending_bdev_io": 0, 00:16:34.796 "completed_nvme_io": 183, 00:16:34.796 "transports": [ 00:16:34.796 { 00:16:34.796 "trtype": "TCP" 00:16:34.796 } 00:16:34.796 ] 00:16:34.796 }, 00:16:34.796 { 00:16:34.796 "name": "nvmf_tgt_poll_group_001", 00:16:34.796 "admin_qpairs": 2, 00:16:34.796 "io_qpairs": 84, 00:16:34.796 "current_admin_qpairs": 0, 00:16:34.796 "current_io_qpairs": 0, 00:16:34.796 "pending_bdev_io": 0, 00:16:34.796 "completed_nvme_io": 234, 00:16:34.796 "transports": [ 00:16:34.796 { 00:16:34.796 "trtype": "TCP" 00:16:34.796 } 00:16:34.796 ] 00:16:34.796 }, 00:16:34.796 { 00:16:34.796 "name": "nvmf_tgt_poll_group_002", 00:16:34.796 "admin_qpairs": 1, 00:16:34.796 "io_qpairs": 84, 00:16:34.796 "current_admin_qpairs": 0, 00:16:34.796 "current_io_qpairs": 0, 00:16:34.796 "pending_bdev_io": 0, 00:16:34.796 "completed_nvme_io": 183, 00:16:34.796 "transports": [ 00:16:34.796 { 00:16:34.796 "trtype": "TCP" 00:16:34.796 } 00:16:34.796 ] 00:16:34.796 }, 00:16:34.796 { 00:16:34.796 "name": "nvmf_tgt_poll_group_003", 00:16:34.796 "admin_qpairs": 2, 00:16:34.796 "io_qpairs": 84, 00:16:34.796 "current_admin_qpairs": 0, 00:16:34.796 "current_io_qpairs": 0, 00:16:34.796 "pending_bdev_io": 0, 00:16:34.796 "completed_nvme_io": 86, 00:16:34.796 "transports": [ 00:16:34.796 { 00:16:34.796 "trtype": "TCP" 00:16:34.796 } 00:16:34.796 ] 00:16:34.796 } 00:16:34.796 ] 00:16:34.796 }' 00:16:34.796 23:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:34.796 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:35.054 rmmod nvme_tcp 00:16:35.054 rmmod nvme_fabrics 00:16:35.054 rmmod nvme_keyring 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 147732 ']' 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 147732 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 147732 ']' 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 147732 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 147732 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 147732' 
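The qpair totals checked above are produced by the jsum helper in target/rpc.sh, which sums a jq projection over the nvmf_get_stats output captured in $stats. A minimal sketch, assuming the stats JSON is fed in via a here-string:
    # Minimal jsum sketch (assumes $stats holds the nvmf_get_stats JSON shown above).
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'   # sum the selected numeric fields
    }
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 admin qpairs in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 84*4 = 336 io qpairs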
00:16:35.054 killing process with pid 147732 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 147732 00:16:35.054 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 147732 00:16:35.312 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:35.312 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:35.312 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:35.312 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:35.312 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:35.312 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:35.312 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:35.312 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.312 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:35.312 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.312 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.312 23:41:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.221 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:37.480 00:16:37.480 real 0m25.465s 00:16:37.480 user 1m22.880s 00:16:37.480 sys 0m4.094s 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.480 ************************************ 00:16:37.480 END TEST nvmf_rpc 00:16:37.480 ************************************ 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:37.480 ************************************ 00:16:37.480 START TEST nvmf_invalid 00:16:37.480 ************************************ 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:37.480 * Looking for test storage... 
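The cleanup traced right above (killprocess followed by nvmftestfini in nvmf/common.sh) reduces to roughly the commands below; this is a sketch assembled from the trace, not the verbatim functions, and the namespace removal step is inferred from the _remove_spdk_ns call.
    # Rough teardown sketch based on the commands traced above.
    kill "$nvmfpid" && wait "$nvmfpid"                      # stop the nvmf_tgt reactor process
    modprobe -v -r nvme-tcp                                 # unload host-side NVMe/TCP modules
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null             # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                                # clear the initiator-side test address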
00:16:37.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.480 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:37.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.481 --rc genhtml_branch_coverage=1 00:16:37.481 --rc genhtml_function_coverage=1 00:16:37.481 --rc genhtml_legend=1 00:16:37.481 --rc geninfo_all_blocks=1 00:16:37.481 --rc geninfo_unexecuted_blocks=1 00:16:37.481 00:16:37.481 ' 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:37.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.481 --rc genhtml_branch_coverage=1 00:16:37.481 --rc genhtml_function_coverage=1 00:16:37.481 --rc genhtml_legend=1 00:16:37.481 --rc geninfo_all_blocks=1 00:16:37.481 --rc geninfo_unexecuted_blocks=1 00:16:37.481 00:16:37.481 ' 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:37.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.481 --rc genhtml_branch_coverage=1 00:16:37.481 --rc genhtml_function_coverage=1 00:16:37.481 --rc genhtml_legend=1 00:16:37.481 --rc geninfo_all_blocks=1 00:16:37.481 --rc geninfo_unexecuted_blocks=1 00:16:37.481 00:16:37.481 ' 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:37.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.481 --rc genhtml_branch_coverage=1 00:16:37.481 --rc genhtml_function_coverage=1 00:16:37.481 --rc genhtml_legend=1 00:16:37.481 --rc geninfo_all_blocks=1 00:16:37.481 --rc geninfo_unexecuted_blocks=1 00:16:37.481 00:16:37.481 ' 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:37.481 23:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:37.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:37.481 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.482 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.482 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.482 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:37.482 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:37.482 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:37.482 23:41:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:40.015 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:40.015 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:40.015 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:40.015 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:40.016 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:40.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:16:40.016 00:16:40.016 --- 10.0.0.2 ping statistics --- 00:16:40.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.016 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:40.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:16:40.016 00:16:40.016 --- 10.0.0.1 ping statistics --- 00:16:40.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.016 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=152333 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 152333 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 152333 ']' 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.016 23:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:40.016 [2024-11-19 23:41:13.920587] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
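Note on the test bed built in the trace above: the harness matches the two e810 physical functions (0000:0a:00.0/.1) to the kernel interfaces cvl_0_0 and cvl_0_1, moves cvl_0_0 into a private network namespace with address 10.0.0.2, leaves cvl_0_1 in the default namespace as 10.0.0.1, opens TCP port 4420, verifies connectivity with ping in both directions, and then starts nvmf_tgt inside the namespace and waits for /var/tmp/spdk.sock. A condensed, by-hand sketch of that setup (interface names and addresses are the ones from this run; nvmf_tcp_init and nvmfappstart in nvmf/common.sh do more bookkeeping than shown, and the harness embeds the full rule text in the iptables comment):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
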
00:16:40.016 [2024-11-19 23:41:13.920658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.016 [2024-11-19 23:41:14.002509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.016 [2024-11-19 23:41:14.058813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.016 [2024-11-19 23:41:14.058880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.016 [2024-11-19 23:41:14.058897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.016 [2024-11-19 23:41:14.058912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.016 [2024-11-19 23:41:14.058923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.016 [2024-11-19 23:41:14.060665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.016 [2024-11-19 23:41:14.060720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.016 [2024-11-19 23:41:14.060775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.016 [2024-11-19 23:41:14.060778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.016 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.016 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:40.016 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.016 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.016 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:40.016 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.016 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:40.016 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4534 00:16:40.274 [2024-11-19 23:41:14.520132] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:40.274 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:40.274 { 00:16:40.274 "nqn": "nqn.2016-06.io.spdk:cnode4534", 00:16:40.274 "tgt_name": "foobar", 00:16:40.274 "method": "nvmf_create_subsystem", 00:16:40.274 "req_id": 1 00:16:40.274 } 00:16:40.274 Got JSON-RPC error response 00:16:40.274 response: 00:16:40.274 { 00:16:40.274 "code": -32603, 00:16:40.274 "message": "Unable to find target foobar" 00:16:40.274 }' 00:16:40.274 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:40.274 { 00:16:40.274 "nqn": "nqn.2016-06.io.spdk:cnode4534", 00:16:40.274 "tgt_name": "foobar", 00:16:40.274 "method": "nvmf_create_subsystem", 00:16:40.274 "req_id": 1 00:16:40.274 } 00:16:40.274 Got JSON-RPC error response 00:16:40.274 
response: 00:16:40.274 { 00:16:40.274 "code": -32603, 00:16:40.274 "message": "Unable to find target foobar" 00:16:40.274 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:40.274 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:40.274 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15910 00:16:40.532 [2024-11-19 23:41:14.801139] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15910: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:40.532 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:40.532 { 00:16:40.532 "nqn": "nqn.2016-06.io.spdk:cnode15910", 00:16:40.532 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:40.532 "method": "nvmf_create_subsystem", 00:16:40.532 "req_id": 1 00:16:40.532 } 00:16:40.532 Got JSON-RPC error response 00:16:40.532 response: 00:16:40.532 { 00:16:40.532 "code": -32602, 00:16:40.532 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:40.532 }' 00:16:40.532 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:40.532 { 00:16:40.532 "nqn": "nqn.2016-06.io.spdk:cnode15910", 00:16:40.532 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:40.532 "method": "nvmf_create_subsystem", 00:16:40.532 "req_id": 1 00:16:40.532 } 00:16:40.532 Got JSON-RPC error response 00:16:40.532 response: 00:16:40.532 { 00:16:40.532 "code": -32602, 00:16:40.532 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:40.532 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:40.532 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:40.532 23:41:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28348 00:16:40.789 [2024-11-19 23:41:15.082096] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28348: invalid model number 'SPDK_Controller' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:41.048 { 00:16:41.048 "nqn": "nqn.2016-06.io.spdk:cnode28348", 00:16:41.048 "model_number": "SPDK_Controller\u001f", 00:16:41.048 "method": "nvmf_create_subsystem", 00:16:41.048 "req_id": 1 00:16:41.048 } 00:16:41.048 Got JSON-RPC error response 00:16:41.048 response: 00:16:41.048 { 00:16:41.048 "code": -32602, 00:16:41.048 "message": "Invalid MN SPDK_Controller\u001f" 00:16:41.048 }' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:41.048 { 00:16:41.048 "nqn": "nqn.2016-06.io.spdk:cnode28348", 00:16:41.048 "model_number": "SPDK_Controller\u001f", 00:16:41.048 "method": "nvmf_create_subsystem", 00:16:41.048 "req_id": 1 00:16:41.048 } 00:16:41.048 Got JSON-RPC error response 00:16:41.048 response: 00:16:41.048 { 00:16:41.048 "code": -32602, 00:16:41.048 "message": "Invalid MN SPDK_Controller\u001f" 00:16:41.048 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:41.048 23:41:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.048 23:41:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:41.048 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:41.049 
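After rejecting a serial number and a model number that embed a literal 0x1f control character, the script stress-tests the same fields with fully random strings. The character-by-character xtrace above and below is gen_random_s from target/invalid.sh: it draws codes from a chars array covering ASCII 32-127, converts each to hex with printf %x, and appends the character produced by echo -e. A minimal sketch of that idea follows; the RANDOM-based index selection and the leading-'-' substitution are assumptions filled in here, not copied from the helper:

    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})          # printable ASCII plus DEL, as seen in the trace
        for ((ll = 0; ll < length; ll++)); do
            # pick a code point, render it as \xNN, and append the resulting character
            string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
        done
        [[ ${string::1} == "-" ]] && string="+${string:1}"   # assumed guard: avoid a value that parses as an option
        echo "$string"
    }

    gen_random_s 21    # e.g. lW'@8(ylqdXCGu%Md~Egg in this run
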
23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 
00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ l == \- ]] 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'lW'\''@8(ylqdXCGu%Md~Egg' 00:16:41.049 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'lW'\''@8(ylqdXCGu%Md~Egg' nqn.2016-06.io.spdk:cnode29224 00:16:41.308 [2024-11-19 23:41:15.435215] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29224: invalid serial number 'lW'@8(ylqdXCGu%Md~Egg' 00:16:41.308 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:41.308 { 00:16:41.308 "nqn": "nqn.2016-06.io.spdk:cnode29224", 00:16:41.308 "serial_number": "lW'\''@8(ylqdXCGu%Md~Egg", 00:16:41.308 "method": "nvmf_create_subsystem", 00:16:41.308 "req_id": 1 00:16:41.308 } 00:16:41.308 Got JSON-RPC error response 00:16:41.308 response: 00:16:41.308 { 00:16:41.308 "code": -32602, 00:16:41.308 "message": "Invalid SN lW'\''@8(ylqdXCGu%Md~Egg" 00:16:41.308 }' 00:16:41.308 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:41.308 { 00:16:41.308 "nqn": "nqn.2016-06.io.spdk:cnode29224", 00:16:41.308 "serial_number": "lW'@8(ylqdXCGu%Md~Egg", 00:16:41.308 "method": "nvmf_create_subsystem", 00:16:41.308 "req_id": 1 00:16:41.308 } 00:16:41.308 Got JSON-RPC error response 00:16:41.308 response: 00:16:41.308 { 00:16:41.308 "code": -32602, 00:16:41.308 "message": "Invalid SN lW'@8(ylqdXCGu%Md~Egg" 00:16:41.308 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:41.308 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:41.308 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:41.308 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' 
'72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:41.308 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:41.308 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:41.308 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:41.308 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 
00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
70 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.309 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=t 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x20' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\hxz3Ae6}])VFPvQ>p/S{0+Ce[@ik'\''dkt!&gu"% k' 00:16:41.310 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '\hxz3Ae6}])VFPvQ>p/S{0+Ce[@ik'\''dkt!&gu"% k' nqn.2016-06.io.spdk:cnode19690 00:16:41.568 [2024-11-19 23:41:15.864641] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19690: invalid model number '\hxz3Ae6}])VFPvQ>p/S{0+Ce[@ik'dkt!&gu"% k' 00:16:41.825 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:41.825 { 00:16:41.825 "nqn": "nqn.2016-06.io.spdk:cnode19690", 00:16:41.825 "model_number": "\\hxz3Ae6}])VFPvQ>p/S{0+Ce[@ik'\''dkt!&gu\"% k", 00:16:41.825 "method": "nvmf_create_subsystem", 00:16:41.825 "req_id": 1 00:16:41.825 } 00:16:41.825 Got JSON-RPC error response 00:16:41.825 response: 00:16:41.825 { 00:16:41.825 "code": -32602, 00:16:41.825 "message": "Invalid MN \\hxz3Ae6}])VFPvQ>p/S{0+Ce[@ik'\''dkt!&gu\"% k" 00:16:41.825 }' 00:16:41.825 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:41.825 { 00:16:41.825 "nqn": "nqn.2016-06.io.spdk:cnode19690", 00:16:41.825 "model_number": "\\hxz3Ae6}])VFPvQ>p/S{0+Ce[@ik'dkt!&gu\"% k", 00:16:41.825 "method": "nvmf_create_subsystem", 00:16:41.825 "req_id": 1 00:16:41.825 } 00:16:41.825 Got JSON-RPC error response 00:16:41.825 response: 00:16:41.825 { 00:16:41.825 "code": -32602, 00:16:41.825 "message": "Invalid MN \\hxz3Ae6}])VFPvQ>p/S{0+Ce[@ik'dkt!&gu\"% k" 00:16:41.825 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:41.825 23:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:42.083 [2024-11-19 23:41:16.149653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.083 23:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:42.341 23:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:42.341 23:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:42.341 23:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:42.341 23:41:16 
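With the random model number rejected above, the script switches from malformed names to protocol-level checks: it has just created the TCP transport and one well-formed subsystem, and next (continuing below) it asks to remove a listener that was never added before probing the cntlid limits. Reproduced by hand against the same target, that sequence is roughly:

    scripts/rpc.py nvmf_create_transport --trtype tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
    # expected from the trace below: code -32602, "Invalid parameters" (no such listener exists to remove)
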
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:42.341 23:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:42.598 [2024-11-19 23:41:16.695468] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:42.598 23:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:42.598 { 00:16:42.598 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:42.598 "listen_address": { 00:16:42.598 "trtype": "tcp", 00:16:42.598 "traddr": "", 00:16:42.598 "trsvcid": "4421" 00:16:42.598 }, 00:16:42.598 "method": "nvmf_subsystem_remove_listener", 00:16:42.598 "req_id": 1 00:16:42.598 } 00:16:42.598 Got JSON-RPC error response 00:16:42.598 response: 00:16:42.598 { 00:16:42.598 "code": -32602, 00:16:42.598 "message": "Invalid parameters" 00:16:42.598 }' 00:16:42.598 23:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:42.598 { 00:16:42.598 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:42.598 "listen_address": { 00:16:42.598 "trtype": "tcp", 00:16:42.598 "traddr": "", 00:16:42.598 "trsvcid": "4421" 00:16:42.598 }, 00:16:42.598 "method": "nvmf_subsystem_remove_listener", 00:16:42.598 "req_id": 1 00:16:42.598 } 00:16:42.598 Got JSON-RPC error response 00:16:42.598 response: 00:16:42.598 { 00:16:42.598 "code": -32602, 00:16:42.598 "message": "Invalid parameters" 00:16:42.598 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:42.598 23:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13156 -i 0 00:16:42.856 [2024-11-19 23:41:16.964289] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13156: invalid cntlid range [0-65519] 00:16:42.856 23:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:42.856 { 00:16:42.856 "nqn": "nqn.2016-06.io.spdk:cnode13156", 00:16:42.856 "min_cntlid": 0, 00:16:42.856 "method": "nvmf_create_subsystem", 00:16:42.856 "req_id": 1 00:16:42.856 } 00:16:42.856 Got JSON-RPC error response 00:16:42.856 response: 00:16:42.856 { 00:16:42.856 "code": -32602, 00:16:42.856 "message": "Invalid cntlid range [0-65519]" 00:16:42.856 }' 00:16:42.856 23:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:42.856 { 00:16:42.856 "nqn": "nqn.2016-06.io.spdk:cnode13156", 00:16:42.856 "min_cntlid": 0, 00:16:42.856 "method": "nvmf_create_subsystem", 00:16:42.856 "req_id": 1 00:16:42.856 } 00:16:42.856 Got JSON-RPC error response 00:16:42.856 response: 00:16:42.856 { 00:16:42.856 "code": -32602, 00:16:42.856 "message": "Invalid cntlid range [0-65519]" 00:16:42.856 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:42.856 23:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16209 -i 65520 00:16:43.114 [2024-11-19 23:41:17.241217] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16209: invalid cntlid range [65520-65519] 00:16:43.114 23:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:43.114 { 00:16:43.114 "nqn": 
"nqn.2016-06.io.spdk:cnode16209", 00:16:43.114 "min_cntlid": 65520, 00:16:43.114 "method": "nvmf_create_subsystem", 00:16:43.114 "req_id": 1 00:16:43.114 } 00:16:43.114 Got JSON-RPC error response 00:16:43.114 response: 00:16:43.114 { 00:16:43.114 "code": -32602, 00:16:43.114 "message": "Invalid cntlid range [65520-65519]" 00:16:43.114 }' 00:16:43.114 23:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:43.114 { 00:16:43.114 "nqn": "nqn.2016-06.io.spdk:cnode16209", 00:16:43.114 "min_cntlid": 65520, 00:16:43.114 "method": "nvmf_create_subsystem", 00:16:43.114 "req_id": 1 00:16:43.114 } 00:16:43.114 Got JSON-RPC error response 00:16:43.114 response: 00:16:43.114 { 00:16:43.114 "code": -32602, 00:16:43.114 "message": "Invalid cntlid range [65520-65519]" 00:16:43.114 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:43.114 23:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16481 -I 0 00:16:43.371 [2024-11-19 23:41:17.518105] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16481: invalid cntlid range [1-0] 00:16:43.371 23:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:43.371 { 00:16:43.371 "nqn": "nqn.2016-06.io.spdk:cnode16481", 00:16:43.371 "max_cntlid": 0, 00:16:43.371 "method": "nvmf_create_subsystem", 00:16:43.371 "req_id": 1 00:16:43.371 } 00:16:43.371 Got JSON-RPC error response 00:16:43.371 response: 00:16:43.371 { 00:16:43.371 "code": -32602, 00:16:43.371 "message": "Invalid cntlid range [1-0]" 00:16:43.371 }' 00:16:43.371 23:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:43.371 { 00:16:43.371 "nqn": "nqn.2016-06.io.spdk:cnode16481", 00:16:43.371 "max_cntlid": 0, 00:16:43.371 "method": "nvmf_create_subsystem", 00:16:43.371 "req_id": 1 00:16:43.371 } 00:16:43.371 Got JSON-RPC error response 00:16:43.371 response: 00:16:43.371 { 00:16:43.371 "code": -32602, 00:16:43.371 "message": "Invalid cntlid range [1-0]" 00:16:43.371 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:43.371 23:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3779 -I 65520 00:16:43.629 [2024-11-19 23:41:17.782999] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3779: invalid cntlid range [1-65520] 00:16:43.629 23:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:43.629 { 00:16:43.629 "nqn": "nqn.2016-06.io.spdk:cnode3779", 00:16:43.629 "max_cntlid": 65520, 00:16:43.629 "method": "nvmf_create_subsystem", 00:16:43.629 "req_id": 1 00:16:43.629 } 00:16:43.629 Got JSON-RPC error response 00:16:43.629 response: 00:16:43.629 { 00:16:43.629 "code": -32602, 00:16:43.629 "message": "Invalid cntlid range [1-65520]" 00:16:43.629 }' 00:16:43.629 23:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:43.629 { 00:16:43.629 "nqn": "nqn.2016-06.io.spdk:cnode3779", 00:16:43.629 "max_cntlid": 65520, 00:16:43.629 "method": "nvmf_create_subsystem", 00:16:43.629 "req_id": 1 00:16:43.629 } 00:16:43.629 Got JSON-RPC error response 00:16:43.629 response: 00:16:43.629 { 00:16:43.629 "code": -32602, 00:16:43.629 "message": "Invalid cntlid range [1-65520]" 
00:16:43.629 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:43.629 23:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5277 -i 6 -I 5 00:16:43.887 [2024-11-19 23:41:18.047903] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5277: invalid cntlid range [6-5] 00:16:43.887 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:43.887 { 00:16:43.887 "nqn": "nqn.2016-06.io.spdk:cnode5277", 00:16:43.887 "min_cntlid": 6, 00:16:43.887 "max_cntlid": 5, 00:16:43.887 "method": "nvmf_create_subsystem", 00:16:43.887 "req_id": 1 00:16:43.887 } 00:16:43.887 Got JSON-RPC error response 00:16:43.887 response: 00:16:43.887 { 00:16:43.887 "code": -32602, 00:16:43.887 "message": "Invalid cntlid range [6-5]" 00:16:43.888 }' 00:16:43.888 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:43.888 { 00:16:43.888 "nqn": "nqn.2016-06.io.spdk:cnode5277", 00:16:43.888 "min_cntlid": 6, 00:16:43.888 "max_cntlid": 5, 00:16:43.888 "method": "nvmf_create_subsystem", 00:16:43.888 "req_id": 1 00:16:43.888 } 00:16:43.888 Got JSON-RPC error response 00:16:43.888 response: 00:16:43.888 { 00:16:43.888 "code": -32602, 00:16:43.888 "message": "Invalid cntlid range [6-5]" 00:16:43.888 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:43.888 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:44.146 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:44.146 { 00:16:44.146 "name": "foobar", 00:16:44.146 "method": "nvmf_delete_target", 00:16:44.146 "req_id": 1 00:16:44.146 } 00:16:44.146 Got JSON-RPC error response 00:16:44.146 response: 00:16:44.146 { 00:16:44.146 "code": -32602, 00:16:44.146 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:44.146 }' 00:16:44.146 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:44.146 { 00:16:44.146 "name": "foobar", 00:16:44.146 "method": "nvmf_delete_target", 00:16:44.146 "req_id": 1 00:16:44.146 } 00:16:44.146 Got JSON-RPC error response 00:16:44.146 response: 00:16:44.146 { 00:16:44.146 "code": -32602, 00:16:44.146 "message": "The specified target doesn't exist, cannot delete it." 
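The cntlid checks above all exercise the same validation in rpc_nvmf_create_subsystem: the controller ID range must satisfy 1 <= min_cntlid <= max_cntlid <= 0xFFEF (65519), so out-of-range values (0 and 65520 on either end) and the inverted 6-5 range are each refused with -32602. One of those cases run by hand, with the subsystem NQN and flags taken from this run:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13156 -i 0
    # expected: {"code": -32602, "message": "Invalid cntlid range [0-65519]"}
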
00:16:44.146 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:44.146 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:44.146 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:44.147 rmmod nvme_tcp 00:16:44.147 rmmod nvme_fabrics 00:16:44.147 rmmod nvme_keyring 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 152333 ']' 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 152333 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 152333 ']' 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 152333 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152333 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152333' 00:16:44.147 killing process with pid 152333 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 152333 00:16:44.147 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 152333 00:16:44.405 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:44.405 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:44.405 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:44.405 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:44.405 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:16:44.405 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:44.405 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:16:44.405 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:44.405 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:44.405 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.405 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.405 23:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.309 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:46.309 00:16:46.309 real 0m8.970s 00:16:46.309 user 0m21.877s 00:16:46.309 sys 0m2.433s 00:16:46.309 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.309 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:46.309 ************************************ 00:16:46.309 END TEST nvmf_invalid 00:16:46.309 ************************************ 00:16:46.309 23:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:46.309 23:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:46.309 23:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.309 23:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.309 ************************************ 00:16:46.309 START TEST nvmf_connect_stress 00:16:46.309 ************************************ 00:16:46.309 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:46.568 * Looking for test storage... 
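Editor's note: the nvmf_invalid checks that finish above all follow the same negative-path pattern: call nvmf_create_subsystem over JSON-RPC with a cntlid bound outside 1-65519 (or with min > max) and require the -32602 "Invalid cntlid range" text in the reply. A minimal sketch of that pattern, using the rpc.py path and the -i/-I flags exactly as they appear in this log; it is not the verbatim contents of target/invalid.sh:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # min_cntlid above the legal range must be rejected by the target
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16209 -i 65520 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] || exit 1    # pass only if the error text matches
    # min > max is rejected the same way
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5277 -i 6 -I 5 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] || exit 1

The escaped glob patterns in the trace above (*\I\n\v\a\l\i\d\ ...*) are simply how bash xtrace prints this kind of [[ $out == *"..."* ]] match.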
00:16:46.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.568 --rc genhtml_branch_coverage=1 00:16:46.568 --rc genhtml_function_coverage=1 00:16:46.568 --rc genhtml_legend=1 00:16:46.568 --rc geninfo_all_blocks=1 00:16:46.568 --rc geninfo_unexecuted_blocks=1 00:16:46.568 00:16:46.568 ' 00:16:46.568 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.568 --rc genhtml_branch_coverage=1 00:16:46.568 --rc genhtml_function_coverage=1 00:16:46.569 --rc genhtml_legend=1 00:16:46.569 --rc geninfo_all_blocks=1 00:16:46.569 --rc geninfo_unexecuted_blocks=1 00:16:46.569 00:16:46.569 ' 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:46.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.569 --rc genhtml_branch_coverage=1 00:16:46.569 --rc genhtml_function_coverage=1 00:16:46.569 --rc genhtml_legend=1 00:16:46.569 --rc geninfo_all_blocks=1 00:16:46.569 --rc geninfo_unexecuted_blocks=1 00:16:46.569 00:16:46.569 ' 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:46.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.569 --rc genhtml_branch_coverage=1 00:16:46.569 --rc genhtml_function_coverage=1 00:16:46.569 --rc genhtml_legend=1 00:16:46.569 --rc geninfo_all_blocks=1 00:16:46.569 --rc geninfo_unexecuted_blocks=1 00:16:46.569 00:16:46.569 ' 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:46.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:46.569 23:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:48.543 23:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:48.543 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:48.543 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:48.543 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:48.544 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:48.544 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.544 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:48.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:16:48.806 00:16:48.806 --- 10.0.0.2 ping statistics --- 00:16:48.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.806 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:16:48.806 00:16:48.806 --- 10.0.0.1 ping statistics --- 00:16:48.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.806 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=154976 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 154976 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 154976 ']' 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:48.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.806 23:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.806 [2024-11-19 23:41:23.007753] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:16:48.806 [2024-11-19 23:41:23.007847] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.806 [2024-11-19 23:41:23.088156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:49.065 [2024-11-19 23:41:23.139166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.065 [2024-11-19 23:41:23.139226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.065 [2024-11-19 23:41:23.139251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.065 [2024-11-19 23:41:23.139265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.065 [2024-11-19 23:41:23.139276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.065 [2024-11-19 23:41:23.140850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.065 [2024-11-19 23:41:23.140905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.065 [2024-11-19 23:41:23.140909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.065 [2024-11-19 23:41:23.291620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
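Editor's note: everything from nvmftestinit up to the rpc_cmd calls above boils down to a small amount of plumbing: move the target-side port into its own network namespace, address both ends, start nvmf_tgt inside that namespace, then configure it over JSON-RPC. A condensed sketch, using the interface names (cvl_0_0/cvl_0_1), addresses and flags this particular run happened to use; it is not a copy of nvmf/common.sh:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # workspace path seen in this log
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side keeps 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done     # the harness uses waitforlisten here
    $spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

Because rpc.py talks to the target over the /var/tmp/spdk.sock UNIX socket, it needs no "ip netns exec" prefix even though the TCP listener itself lives inside the namespace; the listener and the NULL1 bdev are added in the rpc_cmd calls that follow below.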
00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.065 [2024-11-19 23:41:23.309005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.065 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.066 NULL1 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=155004 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:49.066 23:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 155004 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.066 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.631 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.632 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 155004 00:16:49.632 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.632 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.632 23:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.889 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.889 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 155004 00:16:49.889 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.889 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.889 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.146 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.146 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 155004 00:16:50.146 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.146 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.146 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.403 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.403 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 155004 00:16:50.403 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.403 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.403 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.967 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.967 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 155004 00:16:50.967 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.967 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.967 23:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.225 23:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.225 23:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 155004 00:16:51.225 23:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.225 23:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.225 23:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.482 23:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... the same five xtrace records (connect_stress.sh@34 kill -0 155004, connect_stress.sh@35 rpc_cmd, xtrace_disable, set +x, [[ 0 == 0 ]]) repeat every few hundred milliseconds from 00:16:51.225 / 23:41:25 through 00:16:58.949 / 23:41:33 while the stress workload runs; the intermediate iterations are elided here ...]
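(The loop condensed above is the heart of connect_stress.sh: line 34 uses kill -0 to ask whether the background stress workload, pid 155004 in this run, is still alive, and line 35 issues an RPC against the target on every pass; once kill -0 fails, the script waits on the pid and cleans up. The loop body and the specific RPC are not visible in the xtrace, so the following is only a hedged sketch of that pattern in bash, with scripts/rpc.py nvmf_get_subsystems standing in for the harness's rpc_cmd call:)

# Hedged sketch of the poll-while-stressing pattern, not the SPDK script's literal code
STRESS_PID=155004                          # pid of the background stress tool in this run
while kill -0 "$STRESS_PID" 2>/dev/null; do
    # keep exercising the target's RPC server while the workload is alive;
    # nvmf_get_subsystems is only a placeholder for whatever RPC the test issues
    scripts/rpc.py nvmf_get_subsystems > /dev/null
done
wait "$STRESS_PID" 2>/dev/null || true     # reap the workload once kill -0 reports it gone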
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 155004 00:16:58.949 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.949 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.949 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.207 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.207 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 155004 00:16:59.207 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.207 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.207 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:59.207 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 155004 00:16:59.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (155004) - No such process 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 155004 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:59.466 rmmod nvme_tcp 00:16:59.466 rmmod nvme_fabrics 00:16:59.466 rmmod nvme_keyring 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 154976 ']' 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 154976 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 154976 ']' 00:16:59.466 23:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 154976 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154976 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154976' 00:16:59.466 killing process with pid 154976 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 154976 00:16:59.466 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 154976 00:16:59.725 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:59.725 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:59.725 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:59.725 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:59.725 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:59.725 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:59.725 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:59.725 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:59.725 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:59.725 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.725 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.725 23:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:02.259 00:17:02.259 real 0m15.412s 00:17:02.259 user 0m38.314s 00:17:02.259 sys 0m6.102s 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:02.259 ************************************ 00:17:02.259 END TEST nvmf_connect_stress 00:17:02.259 ************************************ 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:02.259 23:41:36 
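(Before the next test begins, the nvmftestfini sequence just logged is worth unpacking: it unloads the initiator-side NVMe modules, kills the nvmf_tgt reactor, strips the test's iptables rule and removes the target network namespace. The interface and namespace names below are the ones this log uses; _remove_spdk_ns is not expanded in the xtrace, so the ip netns del line describes its assumed effect rather than its literal code:)

# Hedged bash sketch of the teardown recorded above (nvmftestfini)
modprobe -r nvme-tcp nvme-fabrics                      # rmmod output above shows nvme_tcp, nvme_fabrics, nvme_keyring going away
kill 154976 && wait 154976                             # stop the nvmf_tgt reactor (pid 154976 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK_NVMF-tagged ACCEPT rule
ip netns del cvl_0_0_ns_spdk 2>/dev/null               # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # clear the initiator-side address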
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:02.259 ************************************ 00:17:02.259 START TEST nvmf_fused_ordering 00:17:02.259 ************************************ 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:02.259 * Looking for test storage... 00:17:02.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.259 --rc genhtml_branch_coverage=1 00:17:02.259 --rc genhtml_function_coverage=1 00:17:02.259 --rc genhtml_legend=1 00:17:02.259 --rc geninfo_all_blocks=1 00:17:02.259 --rc geninfo_unexecuted_blocks=1 00:17:02.259 00:17:02.259 ' 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.259 --rc genhtml_branch_coverage=1 00:17:02.259 --rc genhtml_function_coverage=1 00:17:02.259 --rc genhtml_legend=1 00:17:02.259 --rc geninfo_all_blocks=1 00:17:02.259 --rc geninfo_unexecuted_blocks=1 00:17:02.259 00:17:02.259 ' 00:17:02.259 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:02.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.259 --rc genhtml_branch_coverage=1 00:17:02.260 --rc genhtml_function_coverage=1 00:17:02.260 --rc genhtml_legend=1 00:17:02.260 --rc geninfo_all_blocks=1 00:17:02.260 --rc geninfo_unexecuted_blocks=1 00:17:02.260 00:17:02.260 ' 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:02.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.260 --rc genhtml_branch_coverage=1 00:17:02.260 --rc genhtml_function_coverage=1 00:17:02.260 --rc genhtml_legend=1 00:17:02.260 --rc geninfo_all_blocks=1 00:17:02.260 --rc geninfo_unexecuted_blocks=1 00:17:02.260 00:17:02.260 ' 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:02.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:02.260 23:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:04.165 23:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:04.165 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:04.165 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:04.165 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:04.165 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.165 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:04.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:17:04.166 00:17:04.166 --- 10.0.0.2 ping statistics --- 00:17:04.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.166 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:04.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:17:04.166 00:17:04.166 --- 10.0.0.1 ping statistics --- 00:17:04.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.166 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=158152 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 158152 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 158152 ']' 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:04.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.166 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.166 [2024-11-19 23:41:38.472908] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:17:04.166 [2024-11-19 23:41:38.473007] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.425 [2024-11-19 23:41:38.552789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.425 [2024-11-19 23:41:38.603209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.425 [2024-11-19 23:41:38.603261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.425 [2024-11-19 23:41:38.603286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.425 [2024-11-19 23:41:38.603300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.425 [2024-11-19 23:41:38.603312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.425 [2024-11-19 23:41:38.603959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.425 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.425 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:04.425 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:04.425 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:04.425 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.684 [2024-11-19 23:41:38.752092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.684 [2024-11-19 23:41:38.768351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.684 NULL1 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.684 23:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:04.684 [2024-11-19 23:41:38.813964] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
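(Putting the xtrace together, the fused-ordering test bed is brought up roughly as follows. The interface names, addresses, NQN, RPC names and their arguments are all taken verbatim from the log; rpc_cmd is the harness wrapper and is shown here as scripts/rpc.py, paths are given relative to the SPDK tree, and the waitforlisten/socket plumbing is omitted, so treat this as a hedged reconstruction rather than the scripts' exact code:)

# Target NIC moved into its own network namespace (nvmf_tcp_init)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target inside the namespace, then configure it over JSON-RPC
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512                      # 1000 MiB null bdev, 512-byte blocks
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# The fused_ordering tool then connects to that subsystem as an SPDK initiator
test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'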
00:17:04.684 [2024-11-19 23:41:38.814009] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158292 ] 00:17:04.943 Attached to nqn.2016-06.io.spdk:cnode1 00:17:04.943 Namespace ID: 1 size: 1GB 00:17:04.943 fused_ordering(0)
[... fused_ordering(1) through fused_ordering(419) elided: the tool emits one fused_ordering(N) record per iteration, with N incrementing monotonically and no other output in between ...]
fused_ordering(420)
fused_ordering(421) 00:17:05.769 fused_ordering(422) 00:17:05.769 fused_ordering(423) 00:17:05.769 fused_ordering(424) 00:17:05.769 fused_ordering(425) 00:17:05.769 fused_ordering(426) 00:17:05.769 fused_ordering(427) 00:17:05.769 fused_ordering(428) 00:17:05.769 fused_ordering(429) 00:17:05.769 fused_ordering(430) 00:17:05.769 fused_ordering(431) 00:17:05.769 fused_ordering(432) 00:17:05.769 fused_ordering(433) 00:17:05.769 fused_ordering(434) 00:17:05.769 fused_ordering(435) 00:17:05.769 fused_ordering(436) 00:17:05.769 fused_ordering(437) 00:17:05.769 fused_ordering(438) 00:17:05.769 fused_ordering(439) 00:17:05.769 fused_ordering(440) 00:17:05.769 fused_ordering(441) 00:17:05.769 fused_ordering(442) 00:17:05.769 fused_ordering(443) 00:17:05.769 fused_ordering(444) 00:17:05.769 fused_ordering(445) 00:17:05.769 fused_ordering(446) 00:17:05.769 fused_ordering(447) 00:17:05.769 fused_ordering(448) 00:17:05.769 fused_ordering(449) 00:17:05.769 fused_ordering(450) 00:17:05.769 fused_ordering(451) 00:17:05.769 fused_ordering(452) 00:17:05.769 fused_ordering(453) 00:17:05.769 fused_ordering(454) 00:17:05.769 fused_ordering(455) 00:17:05.769 fused_ordering(456) 00:17:05.769 fused_ordering(457) 00:17:05.769 fused_ordering(458) 00:17:05.769 fused_ordering(459) 00:17:05.769 fused_ordering(460) 00:17:05.769 fused_ordering(461) 00:17:05.769 fused_ordering(462) 00:17:05.769 fused_ordering(463) 00:17:05.769 fused_ordering(464) 00:17:05.769 fused_ordering(465) 00:17:05.769 fused_ordering(466) 00:17:05.769 fused_ordering(467) 00:17:05.769 fused_ordering(468) 00:17:05.769 fused_ordering(469) 00:17:05.769 fused_ordering(470) 00:17:05.769 fused_ordering(471) 00:17:05.769 fused_ordering(472) 00:17:05.769 fused_ordering(473) 00:17:05.769 fused_ordering(474) 00:17:05.769 fused_ordering(475) 00:17:05.769 fused_ordering(476) 00:17:05.769 fused_ordering(477) 00:17:05.769 fused_ordering(478) 00:17:05.769 fused_ordering(479) 00:17:05.769 fused_ordering(480) 00:17:05.769 fused_ordering(481) 00:17:05.769 fused_ordering(482) 00:17:05.769 fused_ordering(483) 00:17:05.769 fused_ordering(484) 00:17:05.769 fused_ordering(485) 00:17:05.769 fused_ordering(486) 00:17:05.769 fused_ordering(487) 00:17:05.769 fused_ordering(488) 00:17:05.769 fused_ordering(489) 00:17:05.769 fused_ordering(490) 00:17:05.769 fused_ordering(491) 00:17:05.769 fused_ordering(492) 00:17:05.769 fused_ordering(493) 00:17:05.769 fused_ordering(494) 00:17:05.769 fused_ordering(495) 00:17:05.769 fused_ordering(496) 00:17:05.769 fused_ordering(497) 00:17:05.769 fused_ordering(498) 00:17:05.769 fused_ordering(499) 00:17:05.769 fused_ordering(500) 00:17:05.769 fused_ordering(501) 00:17:05.769 fused_ordering(502) 00:17:05.769 fused_ordering(503) 00:17:05.769 fused_ordering(504) 00:17:05.769 fused_ordering(505) 00:17:05.769 fused_ordering(506) 00:17:05.769 fused_ordering(507) 00:17:05.769 fused_ordering(508) 00:17:05.769 fused_ordering(509) 00:17:05.769 fused_ordering(510) 00:17:05.769 fused_ordering(511) 00:17:05.769 fused_ordering(512) 00:17:05.769 fused_ordering(513) 00:17:05.769 fused_ordering(514) 00:17:05.769 fused_ordering(515) 00:17:05.769 fused_ordering(516) 00:17:05.769 fused_ordering(517) 00:17:05.769 fused_ordering(518) 00:17:05.769 fused_ordering(519) 00:17:05.769 fused_ordering(520) 00:17:05.769 fused_ordering(521) 00:17:05.769 fused_ordering(522) 00:17:05.769 fused_ordering(523) 00:17:05.769 fused_ordering(524) 00:17:05.769 fused_ordering(525) 00:17:05.769 fused_ordering(526) 00:17:05.769 fused_ordering(527) 00:17:05.769 fused_ordering(528) 
00:17:05.769 fused_ordering(529) 00:17:05.769 fused_ordering(530) 00:17:05.769 fused_ordering(531) 00:17:05.769 fused_ordering(532) 00:17:05.769 fused_ordering(533) 00:17:05.769 fused_ordering(534) 00:17:05.769 fused_ordering(535) 00:17:05.769 fused_ordering(536) 00:17:05.769 fused_ordering(537) 00:17:05.769 fused_ordering(538) 00:17:05.769 fused_ordering(539) 00:17:05.769 fused_ordering(540) 00:17:05.769 fused_ordering(541) 00:17:05.769 fused_ordering(542) 00:17:05.769 fused_ordering(543) 00:17:05.769 fused_ordering(544) 00:17:05.769 fused_ordering(545) 00:17:05.769 fused_ordering(546) 00:17:05.769 fused_ordering(547) 00:17:05.769 fused_ordering(548) 00:17:05.769 fused_ordering(549) 00:17:05.769 fused_ordering(550) 00:17:05.769 fused_ordering(551) 00:17:05.769 fused_ordering(552) 00:17:05.769 fused_ordering(553) 00:17:05.769 fused_ordering(554) 00:17:05.769 fused_ordering(555) 00:17:05.769 fused_ordering(556) 00:17:05.769 fused_ordering(557) 00:17:05.769 fused_ordering(558) 00:17:05.769 fused_ordering(559) 00:17:05.769 fused_ordering(560) 00:17:05.769 fused_ordering(561) 00:17:05.769 fused_ordering(562) 00:17:05.769 fused_ordering(563) 00:17:05.769 fused_ordering(564) 00:17:05.769 fused_ordering(565) 00:17:05.769 fused_ordering(566) 00:17:05.769 fused_ordering(567) 00:17:05.769 fused_ordering(568) 00:17:05.769 fused_ordering(569) 00:17:05.769 fused_ordering(570) 00:17:05.769 fused_ordering(571) 00:17:05.769 fused_ordering(572) 00:17:05.769 fused_ordering(573) 00:17:05.769 fused_ordering(574) 00:17:05.769 fused_ordering(575) 00:17:05.769 fused_ordering(576) 00:17:05.769 fused_ordering(577) 00:17:05.769 fused_ordering(578) 00:17:05.769 fused_ordering(579) 00:17:05.769 fused_ordering(580) 00:17:05.769 fused_ordering(581) 00:17:05.769 fused_ordering(582) 00:17:05.769 fused_ordering(583) 00:17:05.769 fused_ordering(584) 00:17:05.769 fused_ordering(585) 00:17:05.769 fused_ordering(586) 00:17:05.769 fused_ordering(587) 00:17:05.769 fused_ordering(588) 00:17:05.769 fused_ordering(589) 00:17:05.769 fused_ordering(590) 00:17:05.769 fused_ordering(591) 00:17:05.769 fused_ordering(592) 00:17:05.769 fused_ordering(593) 00:17:05.769 fused_ordering(594) 00:17:05.769 fused_ordering(595) 00:17:05.769 fused_ordering(596) 00:17:05.769 fused_ordering(597) 00:17:05.769 fused_ordering(598) 00:17:05.769 fused_ordering(599) 00:17:05.769 fused_ordering(600) 00:17:05.769 fused_ordering(601) 00:17:05.769 fused_ordering(602) 00:17:05.769 fused_ordering(603) 00:17:05.769 fused_ordering(604) 00:17:05.769 fused_ordering(605) 00:17:05.769 fused_ordering(606) 00:17:05.769 fused_ordering(607) 00:17:05.769 fused_ordering(608) 00:17:05.769 fused_ordering(609) 00:17:05.770 fused_ordering(610) 00:17:05.770 fused_ordering(611) 00:17:05.770 fused_ordering(612) 00:17:05.770 fused_ordering(613) 00:17:05.770 fused_ordering(614) 00:17:05.770 fused_ordering(615) 00:17:06.335 fused_ordering(616) 00:17:06.335 fused_ordering(617) 00:17:06.335 fused_ordering(618) 00:17:06.336 fused_ordering(619) 00:17:06.336 fused_ordering(620) 00:17:06.336 fused_ordering(621) 00:17:06.336 fused_ordering(622) 00:17:06.336 fused_ordering(623) 00:17:06.336 fused_ordering(624) 00:17:06.336 fused_ordering(625) 00:17:06.336 fused_ordering(626) 00:17:06.336 fused_ordering(627) 00:17:06.336 fused_ordering(628) 00:17:06.336 fused_ordering(629) 00:17:06.336 fused_ordering(630) 00:17:06.336 fused_ordering(631) 00:17:06.336 fused_ordering(632) 00:17:06.336 fused_ordering(633) 00:17:06.336 fused_ordering(634) 00:17:06.336 fused_ordering(635) 00:17:06.336 
fused_ordering(636) 00:17:06.336 fused_ordering(637) 00:17:06.336 fused_ordering(638) 00:17:06.336 fused_ordering(639) 00:17:06.336 fused_ordering(640) 00:17:06.336 fused_ordering(641) 00:17:06.336 fused_ordering(642) 00:17:06.336 fused_ordering(643) 00:17:06.336 fused_ordering(644) 00:17:06.336 fused_ordering(645) 00:17:06.336 fused_ordering(646) 00:17:06.336 fused_ordering(647) 00:17:06.336 fused_ordering(648) 00:17:06.336 fused_ordering(649) 00:17:06.336 fused_ordering(650) 00:17:06.336 fused_ordering(651) 00:17:06.336 fused_ordering(652) 00:17:06.336 fused_ordering(653) 00:17:06.336 fused_ordering(654) 00:17:06.336 fused_ordering(655) 00:17:06.336 fused_ordering(656) 00:17:06.336 fused_ordering(657) 00:17:06.336 fused_ordering(658) 00:17:06.336 fused_ordering(659) 00:17:06.336 fused_ordering(660) 00:17:06.336 fused_ordering(661) 00:17:06.336 fused_ordering(662) 00:17:06.336 fused_ordering(663) 00:17:06.336 fused_ordering(664) 00:17:06.336 fused_ordering(665) 00:17:06.336 fused_ordering(666) 00:17:06.336 fused_ordering(667) 00:17:06.336 fused_ordering(668) 00:17:06.336 fused_ordering(669) 00:17:06.336 fused_ordering(670) 00:17:06.336 fused_ordering(671) 00:17:06.336 fused_ordering(672) 00:17:06.336 fused_ordering(673) 00:17:06.336 fused_ordering(674) 00:17:06.336 fused_ordering(675) 00:17:06.336 fused_ordering(676) 00:17:06.336 fused_ordering(677) 00:17:06.336 fused_ordering(678) 00:17:06.336 fused_ordering(679) 00:17:06.336 fused_ordering(680) 00:17:06.336 fused_ordering(681) 00:17:06.336 fused_ordering(682) 00:17:06.336 fused_ordering(683) 00:17:06.336 fused_ordering(684) 00:17:06.336 fused_ordering(685) 00:17:06.336 fused_ordering(686) 00:17:06.336 fused_ordering(687) 00:17:06.336 fused_ordering(688) 00:17:06.336 fused_ordering(689) 00:17:06.336 fused_ordering(690) 00:17:06.336 fused_ordering(691) 00:17:06.336 fused_ordering(692) 00:17:06.336 fused_ordering(693) 00:17:06.336 fused_ordering(694) 00:17:06.336 fused_ordering(695) 00:17:06.336 fused_ordering(696) 00:17:06.336 fused_ordering(697) 00:17:06.336 fused_ordering(698) 00:17:06.336 fused_ordering(699) 00:17:06.336 fused_ordering(700) 00:17:06.336 fused_ordering(701) 00:17:06.336 fused_ordering(702) 00:17:06.336 fused_ordering(703) 00:17:06.336 fused_ordering(704) 00:17:06.336 fused_ordering(705) 00:17:06.336 fused_ordering(706) 00:17:06.336 fused_ordering(707) 00:17:06.336 fused_ordering(708) 00:17:06.336 fused_ordering(709) 00:17:06.336 fused_ordering(710) 00:17:06.336 fused_ordering(711) 00:17:06.336 fused_ordering(712) 00:17:06.336 fused_ordering(713) 00:17:06.336 fused_ordering(714) 00:17:06.336 fused_ordering(715) 00:17:06.336 fused_ordering(716) 00:17:06.336 fused_ordering(717) 00:17:06.336 fused_ordering(718) 00:17:06.336 fused_ordering(719) 00:17:06.336 fused_ordering(720) 00:17:06.336 fused_ordering(721) 00:17:06.336 fused_ordering(722) 00:17:06.336 fused_ordering(723) 00:17:06.336 fused_ordering(724) 00:17:06.336 fused_ordering(725) 00:17:06.336 fused_ordering(726) 00:17:06.336 fused_ordering(727) 00:17:06.336 fused_ordering(728) 00:17:06.336 fused_ordering(729) 00:17:06.336 fused_ordering(730) 00:17:06.336 fused_ordering(731) 00:17:06.336 fused_ordering(732) 00:17:06.336 fused_ordering(733) 00:17:06.336 fused_ordering(734) 00:17:06.336 fused_ordering(735) 00:17:06.336 fused_ordering(736) 00:17:06.336 fused_ordering(737) 00:17:06.336 fused_ordering(738) 00:17:06.336 fused_ordering(739) 00:17:06.336 fused_ordering(740) 00:17:06.336 fused_ordering(741) 00:17:06.336 fused_ordering(742) 00:17:06.336 fused_ordering(743) 
00:17:06.336 fused_ordering(744) 00:17:06.336 fused_ordering(745) 00:17:06.336 fused_ordering(746) 00:17:06.336 fused_ordering(747) 00:17:06.336 fused_ordering(748) 00:17:06.336 fused_ordering(749) 00:17:06.336 fused_ordering(750) 00:17:06.336 fused_ordering(751) 00:17:06.336 fused_ordering(752) 00:17:06.336 fused_ordering(753) 00:17:06.336 fused_ordering(754) 00:17:06.336 fused_ordering(755) 00:17:06.336 fused_ordering(756) 00:17:06.336 fused_ordering(757) 00:17:06.336 fused_ordering(758) 00:17:06.336 fused_ordering(759) 00:17:06.336 fused_ordering(760) 00:17:06.336 fused_ordering(761) 00:17:06.336 fused_ordering(762) 00:17:06.336 fused_ordering(763) 00:17:06.336 fused_ordering(764) 00:17:06.336 fused_ordering(765) 00:17:06.336 fused_ordering(766) 00:17:06.336 fused_ordering(767) 00:17:06.336 fused_ordering(768) 00:17:06.336 fused_ordering(769) 00:17:06.336 fused_ordering(770) 00:17:06.336 fused_ordering(771) 00:17:06.336 fused_ordering(772) 00:17:06.336 fused_ordering(773) 00:17:06.336 fused_ordering(774) 00:17:06.336 fused_ordering(775) 00:17:06.336 fused_ordering(776) 00:17:06.336 fused_ordering(777) 00:17:06.336 fused_ordering(778) 00:17:06.336 fused_ordering(779) 00:17:06.336 fused_ordering(780) 00:17:06.336 fused_ordering(781) 00:17:06.336 fused_ordering(782) 00:17:06.336 fused_ordering(783) 00:17:06.336 fused_ordering(784) 00:17:06.336 fused_ordering(785) 00:17:06.336 fused_ordering(786) 00:17:06.336 fused_ordering(787) 00:17:06.336 fused_ordering(788) 00:17:06.336 fused_ordering(789) 00:17:06.336 fused_ordering(790) 00:17:06.336 fused_ordering(791) 00:17:06.336 fused_ordering(792) 00:17:06.336 fused_ordering(793) 00:17:06.336 fused_ordering(794) 00:17:06.336 fused_ordering(795) 00:17:06.336 fused_ordering(796) 00:17:06.336 fused_ordering(797) 00:17:06.336 fused_ordering(798) 00:17:06.336 fused_ordering(799) 00:17:06.336 fused_ordering(800) 00:17:06.336 fused_ordering(801) 00:17:06.336 fused_ordering(802) 00:17:06.336 fused_ordering(803) 00:17:06.336 fused_ordering(804) 00:17:06.336 fused_ordering(805) 00:17:06.336 fused_ordering(806) 00:17:06.336 fused_ordering(807) 00:17:06.336 fused_ordering(808) 00:17:06.336 fused_ordering(809) 00:17:06.336 fused_ordering(810) 00:17:06.336 fused_ordering(811) 00:17:06.336 fused_ordering(812) 00:17:06.336 fused_ordering(813) 00:17:06.336 fused_ordering(814) 00:17:06.336 fused_ordering(815) 00:17:06.336 fused_ordering(816) 00:17:06.336 fused_ordering(817) 00:17:06.336 fused_ordering(818) 00:17:06.336 fused_ordering(819) 00:17:06.336 fused_ordering(820) 00:17:06.902 fused_ordering(821) 00:17:06.902 fused_ordering(822) 00:17:06.902 fused_ordering(823) 00:17:06.902 fused_ordering(824) 00:17:06.902 fused_ordering(825) 00:17:06.902 fused_ordering(826) 00:17:06.902 fused_ordering(827) 00:17:06.902 fused_ordering(828) 00:17:06.902 fused_ordering(829) 00:17:06.902 fused_ordering(830) 00:17:06.902 fused_ordering(831) 00:17:06.902 fused_ordering(832) 00:17:06.902 fused_ordering(833) 00:17:06.902 fused_ordering(834) 00:17:06.902 fused_ordering(835) 00:17:06.902 fused_ordering(836) 00:17:06.902 fused_ordering(837) 00:17:06.902 fused_ordering(838) 00:17:06.902 fused_ordering(839) 00:17:06.902 fused_ordering(840) 00:17:06.902 fused_ordering(841) 00:17:06.902 fused_ordering(842) 00:17:06.902 fused_ordering(843) 00:17:06.902 fused_ordering(844) 00:17:06.902 fused_ordering(845) 00:17:06.902 fused_ordering(846) 00:17:06.902 fused_ordering(847) 00:17:06.902 fused_ordering(848) 00:17:06.902 fused_ordering(849) 00:17:06.902 fused_ordering(850) 00:17:06.902 
fused_ordering(851) 00:17:06.902 fused_ordering(852) 00:17:06.902 fused_ordering(853) 00:17:06.902 fused_ordering(854) 00:17:06.902 fused_ordering(855) 00:17:06.902 fused_ordering(856) 00:17:06.902 fused_ordering(857) 00:17:06.902 fused_ordering(858) 00:17:06.902 fused_ordering(859) 00:17:06.902 fused_ordering(860) 00:17:06.902 fused_ordering(861) 00:17:06.902 fused_ordering(862) 00:17:06.902 fused_ordering(863) 00:17:06.902 fused_ordering(864) 00:17:06.902 fused_ordering(865) 00:17:06.902 fused_ordering(866) 00:17:06.902 fused_ordering(867) 00:17:06.902 fused_ordering(868) 00:17:06.902 fused_ordering(869) 00:17:06.902 fused_ordering(870) 00:17:06.902 fused_ordering(871) 00:17:06.902 fused_ordering(872) 00:17:06.902 fused_ordering(873) 00:17:06.902 fused_ordering(874) 00:17:06.902 fused_ordering(875) 00:17:06.902 fused_ordering(876) 00:17:06.902 fused_ordering(877) 00:17:06.902 fused_ordering(878) 00:17:06.902 fused_ordering(879) 00:17:06.902 fused_ordering(880) 00:17:06.902 fused_ordering(881) 00:17:06.902 fused_ordering(882) 00:17:06.902 fused_ordering(883) 00:17:06.902 fused_ordering(884) 00:17:06.902 fused_ordering(885) 00:17:06.902 fused_ordering(886) 00:17:06.902 fused_ordering(887) 00:17:06.902 fused_ordering(888) 00:17:06.902 fused_ordering(889) 00:17:06.902 fused_ordering(890) 00:17:06.902 fused_ordering(891) 00:17:06.902 fused_ordering(892) 00:17:06.902 fused_ordering(893) 00:17:06.902 fused_ordering(894) 00:17:06.902 fused_ordering(895) 00:17:06.902 fused_ordering(896) 00:17:06.902 fused_ordering(897) 00:17:06.902 fused_ordering(898) 00:17:06.902 fused_ordering(899) 00:17:06.902 fused_ordering(900) 00:17:06.902 fused_ordering(901) 00:17:06.902 fused_ordering(902) 00:17:06.902 fused_ordering(903) 00:17:06.902 fused_ordering(904) 00:17:06.902 fused_ordering(905) 00:17:06.902 fused_ordering(906) 00:17:06.902 fused_ordering(907) 00:17:06.903 fused_ordering(908) 00:17:06.903 fused_ordering(909) 00:17:06.903 fused_ordering(910) 00:17:06.903 fused_ordering(911) 00:17:06.903 fused_ordering(912) 00:17:06.903 fused_ordering(913) 00:17:06.903 fused_ordering(914) 00:17:06.903 fused_ordering(915) 00:17:06.903 fused_ordering(916) 00:17:06.903 fused_ordering(917) 00:17:06.903 fused_ordering(918) 00:17:06.903 fused_ordering(919) 00:17:06.903 fused_ordering(920) 00:17:06.903 fused_ordering(921) 00:17:06.903 fused_ordering(922) 00:17:06.903 fused_ordering(923) 00:17:06.903 fused_ordering(924) 00:17:06.903 fused_ordering(925) 00:17:06.903 fused_ordering(926) 00:17:06.903 fused_ordering(927) 00:17:06.903 fused_ordering(928) 00:17:06.903 fused_ordering(929) 00:17:06.903 fused_ordering(930) 00:17:06.903 fused_ordering(931) 00:17:06.903 fused_ordering(932) 00:17:06.903 fused_ordering(933) 00:17:06.903 fused_ordering(934) 00:17:06.903 fused_ordering(935) 00:17:06.903 fused_ordering(936) 00:17:06.903 fused_ordering(937) 00:17:06.903 fused_ordering(938) 00:17:06.903 fused_ordering(939) 00:17:06.903 fused_ordering(940) 00:17:06.903 fused_ordering(941) 00:17:06.903 fused_ordering(942) 00:17:06.903 fused_ordering(943) 00:17:06.903 fused_ordering(944) 00:17:06.903 fused_ordering(945) 00:17:06.903 fused_ordering(946) 00:17:06.903 fused_ordering(947) 00:17:06.903 fused_ordering(948) 00:17:06.903 fused_ordering(949) 00:17:06.903 fused_ordering(950) 00:17:06.903 fused_ordering(951) 00:17:06.903 fused_ordering(952) 00:17:06.903 fused_ordering(953) 00:17:06.903 fused_ordering(954) 00:17:06.903 fused_ordering(955) 00:17:06.903 fused_ordering(956) 00:17:06.903 fused_ordering(957) 00:17:06.903 fused_ordering(958) 
00:17:06.903 fused_ordering(959) 00:17:06.903 fused_ordering(960) 00:17:06.903 fused_ordering(961) 00:17:06.903 fused_ordering(962) 00:17:06.903 fused_ordering(963) 00:17:06.903 fused_ordering(964) 00:17:06.903 fused_ordering(965) 00:17:06.903 fused_ordering(966) 00:17:06.903 fused_ordering(967) 00:17:06.903 fused_ordering(968) 00:17:06.903 fused_ordering(969) 00:17:06.903 fused_ordering(970) 00:17:06.903 fused_ordering(971) 00:17:06.903 fused_ordering(972) 00:17:06.903 fused_ordering(973) 00:17:06.903 fused_ordering(974) 00:17:06.903 fused_ordering(975) 00:17:06.903 fused_ordering(976) 00:17:06.903 fused_ordering(977) 00:17:06.903 fused_ordering(978) 00:17:06.903 fused_ordering(979) 00:17:06.903 fused_ordering(980) 00:17:06.903 fused_ordering(981) 00:17:06.903 fused_ordering(982) 00:17:06.903 fused_ordering(983) 00:17:06.903 fused_ordering(984) 00:17:06.903 fused_ordering(985) 00:17:06.903 fused_ordering(986) 00:17:06.903 fused_ordering(987) 00:17:06.903 fused_ordering(988) 00:17:06.903 fused_ordering(989) 00:17:06.903 fused_ordering(990) 00:17:06.903 fused_ordering(991) 00:17:06.903 fused_ordering(992) 00:17:06.903 fused_ordering(993) 00:17:06.903 fused_ordering(994) 00:17:06.903 fused_ordering(995) 00:17:06.903 fused_ordering(996) 00:17:06.903 fused_ordering(997) 00:17:06.903 fused_ordering(998) 00:17:06.903 fused_ordering(999) 00:17:06.903 fused_ordering(1000) 00:17:06.903 fused_ordering(1001) 00:17:06.903 fused_ordering(1002) 00:17:06.903 fused_ordering(1003) 00:17:06.903 fused_ordering(1004) 00:17:06.903 fused_ordering(1005) 00:17:06.903 fused_ordering(1006) 00:17:06.903 fused_ordering(1007) 00:17:06.903 fused_ordering(1008) 00:17:06.903 fused_ordering(1009) 00:17:06.903 fused_ordering(1010) 00:17:06.903 fused_ordering(1011) 00:17:06.903 fused_ordering(1012) 00:17:06.903 fused_ordering(1013) 00:17:06.903 fused_ordering(1014) 00:17:06.903 fused_ordering(1015) 00:17:06.903 fused_ordering(1016) 00:17:06.903 fused_ordering(1017) 00:17:06.903 fused_ordering(1018) 00:17:06.903 fused_ordering(1019) 00:17:06.903 fused_ordering(1020) 00:17:06.903 fused_ordering(1021) 00:17:06.903 fused_ordering(1022) 00:17:06.903 fused_ordering(1023) 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:06.903 rmmod nvme_tcp 00:17:06.903 rmmod nvme_fabrics 00:17:06.903 rmmod nvme_keyring 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:06.903 23:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 158152 ']' 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 158152 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 158152 ']' 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 158152 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.903 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158152 00:17:07.162 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:07.162 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158152' 00:17:07.163 killing process with pid 158152 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 158152 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 158152 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.163 23:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:09.697 00:17:09.697 real 0m7.446s 00:17:09.697 user 0m4.927s 00:17:09.697 sys 0m3.090s 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:09.697 ************************************ 00:17:09.697 END TEST nvmf_fused_ordering 00:17:09.697 
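The nvmftestfini/nvmfcleanup trace above tears the fused_ordering fixture down: it unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules, kills the background target (pid 158152 here), restores only the iptables rules tagged with an SPDK_NVMF comment, removes the SPDK network namespace, and flushes the leftover interface address. A minimal bash sketch of that teardown flow, assuming the target PID and the interface/namespace names are passed in, is shown below; it illustrates the sequence and is not the actual helpers from nvmf/common.sh.

    #!/usr/bin/env bash
    # Sketch of an nvmftestfini-style teardown; the variable values mirror this
    # run but are assumptions of the sketch, not taken from the real helper.
    nvmfpid="${1:-}"                 # PID of the background nvmf_tgt app
    target_ns="cvl_0_0_ns_spdk"      # network namespace created at test init
    initiator_if="cvl_0_1"           # initiator-side interface in the root namespace

    # Unload the NVMe-oF transport modules; ignore "not loaded" errors.
    for mod in nvme-tcp nvme-fabrics nvme-keyring; do
        modprobe -v -r "$mod" || true
    done

    # Stop the target process if it is still running.
    if [[ -n "$nvmfpid" ]] && kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid" || true
    fi

    # Drop only the firewall rules the test added (they carry an SPDK_NVMF comment).
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Remove the target namespace and flush the initiator address.
    ip netns delete "$target_ns" 2>/dev/null || true
    ip -4 addr flush "$initiator_if" 2>/dev/null || true

In the actual run the same steps are driven by nvmftestfini, after which run_test reports the elapsed time (real 0m7.446s above) and prints the END TEST banner.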
************************************ 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:09.697 ************************************ 00:17:09.697 START TEST nvmf_ns_masking 00:17:09.697 ************************************ 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:09.697 * Looking for test storage... 00:17:09.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:09.697 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:09.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.698 --rc genhtml_branch_coverage=1 00:17:09.698 --rc genhtml_function_coverage=1 00:17:09.698 --rc genhtml_legend=1 00:17:09.698 --rc geninfo_all_blocks=1 00:17:09.698 --rc geninfo_unexecuted_blocks=1 00:17:09.698 00:17:09.698 ' 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:09.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.698 --rc genhtml_branch_coverage=1 00:17:09.698 --rc genhtml_function_coverage=1 00:17:09.698 --rc genhtml_legend=1 00:17:09.698 --rc geninfo_all_blocks=1 00:17:09.698 --rc geninfo_unexecuted_blocks=1 00:17:09.698 00:17:09.698 ' 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:09.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.698 --rc genhtml_branch_coverage=1 00:17:09.698 --rc genhtml_function_coverage=1 00:17:09.698 --rc genhtml_legend=1 00:17:09.698 --rc geninfo_all_blocks=1 00:17:09.698 --rc geninfo_unexecuted_blocks=1 00:17:09.698 00:17:09.698 ' 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:09.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.698 --rc genhtml_branch_coverage=1 00:17:09.698 --rc genhtml_function_coverage=1 00:17:09.698 --rc genhtml_legend=1 00:17:09.698 --rc geninfo_all_blocks=1 00:17:09.698 --rc geninfo_unexecuted_blocks=1 00:17:09.698 00:17:09.698 ' 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:09.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4c6afb73-3945-4d86-8c14-9f93ed4c01b1 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9a125545-f463-4df4-b78a-ba27b6f6b045 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a8fdedb1-82a4-4d7c-b45e-d6ad84a25d9d 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.698 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.699 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.699 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:09.699 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:09.699 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:09.699 23:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:11.601 23:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:11.601 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:11.601 23:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.601 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:11.602 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:11.602 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
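The gather_supported_nvmf_pci_devs trace above assembles the supported Intel/Mellanox device-ID lists (e810, x722, mlx), matches the two E810 functions 0000:0a:00.0 and 0000:0a:00.1 (vendor 0x8086, device 0x159b), and resolves each function to its kernel interface through /sys/bus/pci/devices/<bdf>/net/, keeping interfaces whose link is up (cvl_0_0 here, cvl_0_1 next). The standalone bash sketch below reproduces that discovery idea; the device-ID list is trimmed to the E810 IDs visible in this run and is an illustrative assumption, not the full table from nvmf/common.sh.

    #!/usr/bin/env bash
    # Sketch: resolve NVMe-oF-capable NIC PCI functions to net interface names via sysfs.
    declare -a net_devs=()

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")         # e.g. 0x8086
        device=$(<"$pci/device")         # e.g. 0x159b
        [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]] || continue

        # A network-class function exposes its interface name under <bdf>/net/<ifname>.
        for ifdir in "$pci"/net/*; do
            [[ -d $ifdir ]] || continue
            ifname=${ifdir##*/}
            [[ $(<"$ifdir/operstate") == up ]] && net_devs+=("$ifname")
        done
    done

    (( ${#net_devs[@]} )) && printf 'Found usable net device: %s\n' "${net_devs[@]}"

On this host the loop would report cvl_0_0 and cvl_0_1, matching the "Found net devices under 0000:0a:00.x" lines in the trace.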
00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:11.602 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.602 23:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:11.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:17:11.602 00:17:11.602 --- 10.0.0.2 ping statistics --- 00:17:11.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.602 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:17:11.602 00:17:11.602 --- 10.0.0.1 ping statistics --- 00:17:11.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.602 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:11.602 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=160499 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 160499 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 160499 ']' 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.861 23:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:11.861 [2024-11-19 23:41:45.983761] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:17:11.861 [2024-11-19 23:41:45.983857] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.861 [2024-11-19 23:41:46.060236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.861 [2024-11-19 23:41:46.108157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.861 [2024-11-19 23:41:46.108225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.861 [2024-11-19 23:41:46.108250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.861 [2024-11-19 23:41:46.108264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.861 [2024-11-19 23:41:46.108275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
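The nvmf_tcp_init sequence traced above isolates one port of the NIC in a private network namespace so the target side (10.0.0.2 on cvl_0_0) and the initiator side (10.0.0.1 on cvl_0_1) exchange NVMe/TCP traffic over real hardware on a single host; condensed from the trace, the setup is roughly:
ip netns add cvl_0_0_ns_spdk                      # namespace that will own the target-side port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let the NVMe/TCP port through (the trace also tags the rule with an SPDK_NVMF comment)
ping -c 1 10.0.0.2                                # sanity-check both directions before starting the target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &   # nvmf_tgt itself runs inside the namespace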
00:17:11.861 [2024-11-19 23:41:46.108983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.119 23:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.119 23:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:12.119 23:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.119 23:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:12.119 23:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:12.119 23:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.119 23:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:12.377 [2024-11-19 23:41:46.499266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.377 23:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:12.377 23:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:12.377 23:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:12.635 Malloc1 00:17:12.635 23:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:12.893 Malloc2 00:17:12.893 23:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:13.150 23:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:13.420 23:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.678 [2024-11-19 23:41:47.942305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.678 23:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:13.678 23:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a8fdedb1-82a4-4d7c-b45e-d6ad84a25d9d -a 10.0.0.2 -s 4420 -i 4 00:17:13.936 23:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:13.936 23:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:13.936 23:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:13.936 23:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:13.936 
23:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:16.464 [ 0]:0x1 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82ab30fe0d4749d5828cb5fe6ba035f9 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82ab30fe0d4749d5828cb5fe6ba035f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:16.464 [ 0]:0x1 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82ab30fe0d4749d5828cb5fe6ba035f9 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82ab30fe0d4749d5828cb5fe6ba035f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:16.464 23:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:16.464 [ 1]:0x2 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=424d4a7822634318854ad7f9f1baa02d 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 424d4a7822634318854ad7f9f1baa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:16.464 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:16.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.723 23:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:16.981 23:41:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:17.239 23:41:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:17.239 23:41:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a8fdedb1-82a4-4d7c-b45e-d6ad84a25d9d -a 10.0.0.2 -s 4420 -i 4 00:17:17.497 23:41:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:17.497 23:41:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:17.497 23:41:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.497 23:41:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:17.497 23:41:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:17.497 23:41:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:19.398 [ 0]:0x2 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:19.398 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:19.656 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=424d4a7822634318854ad7f9f1baa02d 00:17:19.656 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 424d4a7822634318854ad7f9f1baa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:19.656 23:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:19.915 [ 0]:0x1 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82ab30fe0d4749d5828cb5fe6ba035f9 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82ab30fe0d4749d5828cb5fe6ba035f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:19.915 [ 1]:0x2 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=424d4a7822634318854ad7f9f1baa02d 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 424d4a7822634318854ad7f9f1baa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:19.915 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:20.179 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:20.179 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:20.179 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:20.179 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:20.179 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.179 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:20.179 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.179 23:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:20.179 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.179 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:20.179 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:20.179 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.510 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:20.510 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.510 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:20.510 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.510 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.510 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.510 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:20.510 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.510 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:20.510 [ 0]:0x2 00:17:20.510 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:20.510 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.510 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=424d4a7822634318854ad7f9f1baa02d 00:17:20.511 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 424d4a7822634318854ad7f9f1baa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.511 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:20.511 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.511 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:20.776 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:20.776 23:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a8fdedb1-82a4-4d7c-b45e-d6ad84a25d9d -a 10.0.0.2 -s 4420 -i 4 00:17:20.776 23:41:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:20.776 23:41:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:20.776 23:41:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.776 23:41:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:20.776 23:41:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:20.776 23:41:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:23.306 [ 0]:0x1 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=82ab30fe0d4749d5828cb5fe6ba035f9 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 82ab30fe0d4749d5828cb5fe6ba035f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:23.306 [ 1]:0x2 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=424d4a7822634318854ad7f9f1baa02d 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 424d4a7822634318854ad7f9f1baa02d != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:23.306 [ 0]:0x2 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:23.306 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=424d4a7822634318854ad7f9f1baa02d 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 424d4a7822634318854ad7f9f1baa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.565 23:41:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:23.565 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:23.823 [2024-11-19 23:41:57.876931] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:23.823 request: 00:17:23.823 { 00:17:23.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:23.823 "nsid": 2, 00:17:23.823 "host": "nqn.2016-06.io.spdk:host1", 00:17:23.823 "method": "nvmf_ns_remove_host", 00:17:23.823 "req_id": 1 00:17:23.823 } 00:17:23.823 Got JSON-RPC error response 00:17:23.823 response: 00:17:23.823 { 00:17:23.823 "code": -32602, 00:17:23.823 "message": "Invalid parameters" 00:17:23.823 } 00:17:23.823 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:23.823 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.823 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.823 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:23.824 23:41:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:23.824 [ 0]:0x2 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=424d4a7822634318854ad7f9f1baa02d 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 424d4a7822634318854ad7f9f1baa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:23.824 23:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.824 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=162125 00:17:23.824 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:23.824 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.824 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 162125 /var/tmp/host.sock 00:17:23.824 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 162125 ']' 00:17:23.824 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:23.824 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.824 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:23.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:23.824 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.824 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:24.082 [2024-11-19 23:41:58.185970] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:17:24.082 [2024-11-19 23:41:58.186053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162125 ] 00:17:24.082 [2024-11-19 23:41:58.262472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.082 [2024-11-19 23:41:58.313009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.340 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.340 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:24.340 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.598 23:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:24.857 23:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4c6afb73-3945-4d86-8c14-9f93ed4c01b1 00:17:24.857 23:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:24.857 23:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4C6AFB7339454D868C149F93ED4C01B1 -i 00:17:25.114 23:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9a125545-f463-4df4-b78a-ba27b6f6b045 00:17:25.114 23:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:25.114 23:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9A125545F4634DF4B78ABA27B6F6B045 -i 00:17:25.679 23:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:25.679 23:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:26.244 23:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:26.244 23:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:26.502 nvme0n1 00:17:26.502 23:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:26.502 23:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:27.067 nvme1n2 00:17:27.067 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:27.067 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:27.067 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:27.067 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:27.067 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:27.325 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:27.325 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:27.325 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:27.325 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:27.582 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4c6afb73-3945-4d86-8c14-9f93ed4c01b1 == \4\c\6\a\f\b\7\3\-\3\9\4\5\-\4\d\8\6\-\8\c\1\4\-\9\f\9\3\e\d\4\c\0\1\b\1 ]] 00:17:27.582 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:27.582 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:27.582 23:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:27.840 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
9a125545-f463-4df4-b78a-ba27b6f6b045 == \9\a\1\2\5\5\4\5\-\f\4\6\3\-\4\d\f\4\-\b\7\8\a\-\b\a\2\7\b\6\f\6\b\0\4\5 ]] 00:17:27.840 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:28.097 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 4c6afb73-3945-4d86-8c14-9f93ed4c01b1 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4C6AFB7339454D868C149F93ED4C01B1 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4C6AFB7339454D868C149F93ED4C01B1 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:28.356 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4C6AFB7339454D868C149F93ED4C01B1 00:17:28.613 [2024-11-19 23:42:02.895928] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:28.614 [2024-11-19 23:42:02.895982] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:28.614 [2024-11-19 23:42:02.896000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:28.614 request: 00:17:28.614 { 00:17:28.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.614 "namespace": { 00:17:28.614 "bdev_name": 
"invalid", 00:17:28.614 "nsid": 1, 00:17:28.614 "nguid": "4C6AFB7339454D868C149F93ED4C01B1", 00:17:28.614 "no_auto_visible": false 00:17:28.614 }, 00:17:28.614 "method": "nvmf_subsystem_add_ns", 00:17:28.614 "req_id": 1 00:17:28.614 } 00:17:28.614 Got JSON-RPC error response 00:17:28.614 response: 00:17:28.614 { 00:17:28.614 "code": -32602, 00:17:28.614 "message": "Invalid parameters" 00:17:28.614 } 00:17:28.614 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:28.614 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:28.614 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:28.614 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:28.614 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 4c6afb73-3945-4d86-8c14-9f93ed4c01b1 00:17:28.614 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:28.614 23:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4C6AFB7339454D868C149F93ED4C01B1 -i 00:17:29.178 23:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:31.075 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:31.075 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:31.075 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:31.333 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:31.333 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 162125 00:17:31.333 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 162125 ']' 00:17:31.333 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 162125 00:17:31.333 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:31.333 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.333 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 162125 00:17:31.333 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:31.333 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:31.333 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 162125' 00:17:31.333 killing process with pid 162125 00:17:31.333 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 162125 00:17:31.333 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 162125 00:17:31.591 23:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:32.156 rmmod nvme_tcp 00:17:32.156 rmmod nvme_fabrics 00:17:32.156 rmmod nvme_keyring 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 160499 ']' 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 160499 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 160499 ']' 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 160499 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 160499 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 160499' 00:17:32.156 killing process with pid 160499 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 160499 00:17:32.156 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 160499 00:17:32.415 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:32.415 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:32.415 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:32.415 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:32.415 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:32.415 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:32.415 
23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:32.415 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:32.415 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:32.415 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.415 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.415 23:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.944 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:34.944 00:17:34.944 real 0m25.112s 00:17:34.944 user 0m36.730s 00:17:34.944 sys 0m4.594s 00:17:34.944 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.944 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:34.944 ************************************ 00:17:34.944 END TEST nvmf_ns_masking 00:17:34.944 ************************************ 00:17:34.944 23:42:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:34.944 23:42:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:34.945 ************************************ 00:17:34.945 START TEST nvmf_nvme_cli 00:17:34.945 ************************************ 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:34.945 * Looking for test storage... 
00:17:34.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:34.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.945 --rc genhtml_branch_coverage=1 00:17:34.945 --rc genhtml_function_coverage=1 00:17:34.945 --rc genhtml_legend=1 00:17:34.945 --rc geninfo_all_blocks=1 00:17:34.945 --rc geninfo_unexecuted_blocks=1 00:17:34.945 00:17:34.945 ' 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:34.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.945 --rc genhtml_branch_coverage=1 00:17:34.945 --rc genhtml_function_coverage=1 00:17:34.945 --rc genhtml_legend=1 00:17:34.945 --rc geninfo_all_blocks=1 00:17:34.945 --rc geninfo_unexecuted_blocks=1 00:17:34.945 00:17:34.945 ' 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:34.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.945 --rc genhtml_branch_coverage=1 00:17:34.945 --rc genhtml_function_coverage=1 00:17:34.945 --rc genhtml_legend=1 00:17:34.945 --rc geninfo_all_blocks=1 00:17:34.945 --rc geninfo_unexecuted_blocks=1 00:17:34.945 00:17:34.945 ' 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:34.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.945 --rc genhtml_branch_coverage=1 00:17:34.945 --rc genhtml_function_coverage=1 00:17:34.945 --rc genhtml_legend=1 00:17:34.945 --rc geninfo_all_blocks=1 00:17:34.945 --rc geninfo_unexecuted_blocks=1 00:17:34.945 00:17:34.945 ' 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
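The burst of scripts/common.sh trace above is just the lt 1.15 2 check deciding whether the installed lcov is older than 2.x: both version strings are split on ".", "-" and ":", the components are compared numerically left to right, and the result selects the extra --rc lcov_* coverage options exported right after. A minimal stand-alone sketch of that comparison, simplified from the cmp_versions helper being traced (not the project's exact code):

    # Hypothetical, simplified version comparison in the spirit of cmp_versions above.
    version_lt() {                        # returns 0 (true) if $1 < $2
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1                              # equal -> not less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"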
00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.945 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:34.946 23:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:34.946 23:42:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:36.847 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:36.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.847 
23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:36.847 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:36.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:36.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:36.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:17:36.848 00:17:36.848 --- 10.0.0.2 ping statistics --- 00:17:36.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.848 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:17:36.848 00:17:36.848 --- 10.0.0.1 ping statistics --- 00:17:36.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.848 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=165648 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 165648 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 165648 ']' 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.848 23:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:36.848 [2024-11-19 23:42:11.021248] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
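Condensed, the bring-up that produced the two ping blocks above is: move the first ice port (cvl_0_0) into a private namespace for the target, keep cvl_0_1 in the root namespace as the initiator side, address them as 10.0.0.2 and 10.0.0.1, punch a tagged firewall hole for port 4420, and launch nvmf_tgt inside the namespace. The commands below are the ones traced above, collected in order; interface names and addresses are specific to this runner:

    # Target side lives in its own network namespace; the initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP in, tagged so the teardown's iptables-save | grep -v SPDK_NVMF can drop it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Start the target inside the namespace with the same flags as the nvmfappstart trace above.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &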
00:17:36.848 [2024-11-19 23:42:11.021342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.848 [2024-11-19 23:42:11.100568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.848 [2024-11-19 23:42:11.152963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.848 [2024-11-19 23:42:11.153037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.848 [2024-11-19 23:42:11.153084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.848 [2024-11-19 23:42:11.153118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.848 [2024-11-19 23:42:11.153136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.848 [2024-11-19 23:42:11.154922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.848 [2024-11-19 23:42:11.154979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.848 [2024-11-19 23:42:11.155039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.848 [2024-11-19 23:42:11.155058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.107 [2024-11-19 23:42:11.305701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.107 Malloc0 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
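Once the target is listening on /var/tmp/spdk.sock, the provisioning that the next stretch of trace performs through rpc_cmd boils down to a short RPC sequence: create the TCP transport, create two RAM-backed bdevs, wrap them in subsystem cnode1, and expose both the subsystem and the discovery service on 10.0.0.2:4420. Written out directly against rpc.py (same arguments as traced; -a allows any host and -s sets the serial string the later lsblk check greps for, the remaining flags are passed through as shown):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # default socket /var/tmp/spdk.sock

    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, flags as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0              # two 64 MiB bdevs, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420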
00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.107 Malloc1 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.107 [2024-11-19 23:42:11.400486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:37.107 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.108 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:37.365 00:17:37.365 Discovery Log Number of Records 2, Generation counter 2 00:17:37.365 =====Discovery Log Entry 0====== 00:17:37.365 trtype: tcp 00:17:37.365 adrfam: ipv4 00:17:37.365 subtype: current discovery subsystem 00:17:37.365 treq: not required 00:17:37.365 portid: 0 00:17:37.365 trsvcid: 4420 00:17:37.365 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:37.365 traddr: 10.0.0.2 00:17:37.365 eflags: explicit discovery connections, duplicate discovery information 00:17:37.365 sectype: none 00:17:37.365 =====Discovery Log Entry 1====== 00:17:37.365 trtype: tcp 00:17:37.365 adrfam: ipv4 00:17:37.365 subtype: nvme subsystem 00:17:37.365 treq: not required 00:17:37.365 portid: 0 00:17:37.365 trsvcid: 4420 00:17:37.365 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:37.365 traddr: 10.0.0.2 00:17:37.365 eflags: none 00:17:37.365 sectype: none 00:17:37.365 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:37.365 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:37.365 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:37.365 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:37.365 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:37.365 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:37.365 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:37.365 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:37.365 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:37.365 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:37.365 23:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:37.930 23:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:37.930 23:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:37.930 23:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:37.930 23:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:37.930 23:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:37.930 23:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:40.455 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:40.455 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:40.455 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:40.455 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:40.455 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:40.455 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:40.455 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:40.455 23:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:40.455 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:40.456 /dev/nvme0n2 ]] 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:40.456 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.714 23:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.714 rmmod nvme_tcp 00:17:40.714 rmmod nvme_fabrics 00:17:40.714 rmmod nvme_keyring 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 165648 ']' 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 165648 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 165648 ']' 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 165648 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165648 
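From the initiator's point of view, the whole nvme_cli exercise above is the standard discover / connect / enumerate / disconnect round trip, using the host NQN and host ID that common.sh generated earlier. Collected from the trace (the device count is what waitforserial and get_nvme_devs check by parsing lsblk and nvme list):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

    # Discovery service and the data subsystem both listen on 10.0.0.2:4420.
    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420

    # Connect; the two Malloc namespaces show up as /dev/nvme0n1 and /dev/nvme0n2.
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # expect 2

    # Tear the association back down.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1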
00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165648' 00:17:40.714 killing process with pid 165648 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 165648 00:17:40.714 23:42:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 165648 00:17:40.972 23:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:40.972 23:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:40.972 23:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:40.972 23:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:40.972 23:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:40.972 23:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:40.972 23:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:40.972 23:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:40.972 23:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:40.972 23:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.972 23:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.972 23:42:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.504 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:43.504 00:17:43.504 real 0m8.494s 00:17:43.504 user 0m16.567s 00:17:43.504 sys 0m2.216s 00:17:43.504 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:43.505 ************************************ 00:17:43.505 END TEST nvmf_nvme_cli 00:17:43.505 ************************************ 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:43.505 ************************************ 00:17:43.505 START TEST nvmf_vfio_user 00:17:43.505 ************************************ 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
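The END TEST / START TEST banners and the real/user/sys summaries above come from the run_test wrapper that launched nvme_cli.sh and, here, nvmf_vfio_user.sh. A rough stand-in, assuming it is essentially a timed call with banners; the real helper in autotest_common.sh does more (xtrace control, the argument check visible as '[' 3 -le 1 ']' above, exit-code plumbing):

    # Hypothetical sketch of the run_test banner/timing wrapper, not the project's exact helper.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test_sketch nvmf_vfio_user \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp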
00:17:43.505 * Looking for test storage... 00:17:43.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:43.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.505 --rc genhtml_branch_coverage=1 00:17:43.505 --rc genhtml_function_coverage=1 00:17:43.505 --rc genhtml_legend=1 00:17:43.505 --rc geninfo_all_blocks=1 00:17:43.505 --rc geninfo_unexecuted_blocks=1 00:17:43.505 00:17:43.505 ' 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:43.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.505 --rc genhtml_branch_coverage=1 00:17:43.505 --rc genhtml_function_coverage=1 00:17:43.505 --rc genhtml_legend=1 00:17:43.505 --rc geninfo_all_blocks=1 00:17:43.505 --rc geninfo_unexecuted_blocks=1 00:17:43.505 00:17:43.505 ' 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:43.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.505 --rc genhtml_branch_coverage=1 00:17:43.505 --rc genhtml_function_coverage=1 00:17:43.505 --rc genhtml_legend=1 00:17:43.505 --rc geninfo_all_blocks=1 00:17:43.505 --rc geninfo_unexecuted_blocks=1 00:17:43.505 00:17:43.505 ' 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:43.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.505 --rc genhtml_branch_coverage=1 00:17:43.505 --rc genhtml_function_coverage=1 00:17:43.505 --rc genhtml_legend=1 00:17:43.505 --rc geninfo_all_blocks=1 00:17:43.505 --rc geninfo_unexecuted_blocks=1 00:17:43.505 00:17:43.505 ' 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.505 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
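Note on the version check traced above: scripts/common.sh's cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field to decide that lcov 1.15 is older than 2, which selects the older "--rc lcov_*_coverage=1" option style. A minimal standalone sketch of that comparison idea is below; the function name version_lt is illustrative shorthand, not the script's actual helper.

# Sketch: dotted-version "less than" in the style of cmp_versions (assumed simplification).
version_lt() {
    local IFS=.-:
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    local i x y
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}
        if (( x < y )); then return 0; fi   # earlier field smaller -> older version
        if (( x > y )); then return 1; fi   # earlier field larger -> not older
    done
    return 1                                # equal -> not strictly less than
}
version_lt 1.15 2 && echo "lcov older than 2: use the --rc lcov_branch_coverage=1 option style"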
00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=166586 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 166586' 00:17:43.506 Process pid: 166586 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 166586 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 166586 ']' 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:43.506 [2024-11-19 23:42:17.486950] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:17:43.506 [2024-11-19 23:42:17.487038] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.506 [2024-11-19 23:42:17.555145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.506 [2024-11-19 23:42:17.602420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.506 [2024-11-19 23:42:17.602476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
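For orientation: the target is launched above with shared-memory id 0 (-i 0), the full tracepoint mask (-e 0xFFFF) and a four-core mask (-m '[0,1,2,3]'), and the script then waits for it to listen on /var/tmp/spdk.sock. A reduced launch-and-wait sketch outside the autotest helpers might look like the following; the polling loop is an assumed stand-in for the suite's waitforlisten/killprocess helpers, not their implementation.

# Sketch: start nvmf_tgt in the background and wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
trap 'kill -9 $nvmfpid' SIGINT SIGTERM EXIT   # the suite's killprocess helper is the real cleanup path

# Crude stand-in for waitforlisten: poll for the UNIX-domain RPC socket, then probe it once.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done
"$SPDK/scripts/rpc.py" rpc_get_methods > /dev/null   # succeeds once the app is answering RPCs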
00:17:43.506 [2024-11-19 23:42:17.602491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.506 [2024-11-19 23:42:17.602502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.506 [2024-11-19 23:42:17.602512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.506 [2024-11-19 23:42:17.604006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.506 [2024-11-19 23:42:17.604079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.506 [2024-11-19 23:42:17.604135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.506 [2024-11-19 23:42:17.604138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:43.506 23:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:44.438 23:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:45.017 23:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:45.017 23:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:45.017 23:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:45.017 23:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:45.017 23:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:45.276 Malloc1 00:17:45.276 23:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:45.533 23:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:45.792 23:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:46.050 23:42:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:46.050 23:42:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:46.050 23:42:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:46.309 Malloc2 00:17:46.309 23:42:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
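Condensed, the setup performed here (and continued for cnode2 just below) is one transport plus a small per-device RPC loop: make the vfio-user socket directory, create a malloc bdev, create the subsystem, attach the namespace, and add a VFIOUSER listener whose traddr is that directory. The sketch restates the same RPC names and arguments seen in the trace; rpc and NUM_DEVICES are local shorthand.

# Sketch of the per-device vfio-user setup loop, using the RPCs shown above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER                       # one transport for all vfio-user subsystems
NUM_DEVICES=2
for i in $(seq 1 $NUM_DEVICES); do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$dir"
    $rpc bdev_malloc_create 64 512 -b Malloc$i               # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
         -t VFIOUSER -a "$dir" -s 0                          # traddr is the socket directory
done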
00:17:46.640 23:42:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:46.898 23:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:47.157 23:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:47.157 23:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:47.157 23:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:47.157 23:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:47.157 23:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:47.157 23:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:47.157 [2024-11-19 23:42:21.427889] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:17:47.157 [2024-11-19 23:42:21.427926] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167012 ] 00:17:47.418 [2024-11-19 23:42:21.474881] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:47.418 [2024-11-19 23:42:21.487575] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:47.418 [2024-11-19 23:42:21.487603] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2014d77000 00:17:47.418 [2024-11-19 23:42:21.488565] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.418 [2024-11-19 23:42:21.489555] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.418 [2024-11-19 23:42:21.490561] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.418 [2024-11-19 23:42:21.491568] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.418 [2024-11-19 23:42:21.492571] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:47.418 [2024-11-19 23:42:21.493575] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.418 [2024-11-19 23:42:21.494579] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:17:47.418 [2024-11-19 23:42:21.495589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:47.418 [2024-11-19 23:42:21.496592] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:47.418 [2024-11-19 23:42:21.496612] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2013a6f000 00:17:47.418 [2024-11-19 23:42:21.497729] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:47.418 [2024-11-19 23:42:21.513350] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:47.418 [2024-11-19 23:42:21.513409] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:47.419 [2024-11-19 23:42:21.515706] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:47.419 [2024-11-19 23:42:21.515762] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:47.419 [2024-11-19 23:42:21.515848] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:47.419 [2024-11-19 23:42:21.515874] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:47.419 [2024-11-19 23:42:21.515885] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:47.419 [2024-11-19 23:42:21.516698] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:47.419 [2024-11-19 23:42:21.516717] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:47.419 [2024-11-19 23:42:21.516729] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:47.419 [2024-11-19 23:42:21.517699] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:47.419 [2024-11-19 23:42:21.517719] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:47.419 [2024-11-19 23:42:21.517732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:47.419 [2024-11-19 23:42:21.518704] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:47.419 [2024-11-19 23:42:21.518722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:47.419 [2024-11-19 23:42:21.519707] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:17:47.419 [2024-11-19 23:42:21.519726] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:47.419 [2024-11-19 23:42:21.519734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:47.419 [2024-11-19 23:42:21.519745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:47.419 [2024-11-19 23:42:21.519859] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:47.419 [2024-11-19 23:42:21.519867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:47.419 [2024-11-19 23:42:21.519875] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:47.419 [2024-11-19 23:42:21.520718] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:47.419 [2024-11-19 23:42:21.521720] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:47.419 [2024-11-19 23:42:21.522723] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:47.419 [2024-11-19 23:42:21.523718] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:47.419 [2024-11-19 23:42:21.523823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:47.419 [2024-11-19 23:42:21.524733] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:47.419 [2024-11-19 23:42:21.524750] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:47.419 [2024-11-19 23:42:21.524759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:47.419 [2024-11-19 23:42:21.524782] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:47.419 [2024-11-19 23:42:21.524797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:47.419 [2024-11-19 23:42:21.524820] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.419 [2024-11-19 23:42:21.524830] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.419 [2024-11-19 23:42:21.524836] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.419 [2024-11-19 23:42:21.524854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:17:47.419 [2024-11-19 23:42:21.524919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:47.419 [2024-11-19 23:42:21.524934] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:47.419 [2024-11-19 23:42:21.524942] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:47.419 [2024-11-19 23:42:21.524949] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:47.419 [2024-11-19 23:42:21.524957] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:47.419 [2024-11-19 23:42:21.524968] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:47.419 [2024-11-19 23:42:21.524976] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:47.419 [2024-11-19 23:42:21.524984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:47.419 [2024-11-19 23:42:21.525002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:47.419 [2024-11-19 23:42:21.525018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:47.419 [2024-11-19 23:42:21.525032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:47.419 [2024-11-19 23:42:21.525047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.419 [2024-11-19 23:42:21.525096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.419 [2024-11-19 23:42:21.525109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.419 [2024-11-19 23:42:21.525120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.419 [2024-11-19 23:42:21.525128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:47.419 [2024-11-19 23:42:21.525140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:47.419 [2024-11-19 23:42:21.525153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:47.419 [2024-11-19 23:42:21.525165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:47.419 [2024-11-19 23:42:21.525180] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:47.419 
[2024-11-19 23:42:21.525189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:47.420 [2024-11-19 23:42:21.525233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:47.420 [2024-11-19 23:42:21.525300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525329] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:47.420 [2024-11-19 23:42:21.525337] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:47.420 [2024-11-19 23:42:21.525343] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.420 [2024-11-19 23:42:21.525363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:47.420 [2024-11-19 23:42:21.525394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:47.420 [2024-11-19 23:42:21.525415] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:47.420 [2024-11-19 23:42:21.525434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525460] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.420 [2024-11-19 23:42:21.525468] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.420 [2024-11-19 23:42:21.525474] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.420 [2024-11-19 23:42:21.525482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.420 [2024-11-19 23:42:21.525507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:47.420 [2024-11-19 23:42:21.525528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525553] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:47.420 [2024-11-19 23:42:21.525561] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.420 [2024-11-19 23:42:21.525566] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.420 [2024-11-19 23:42:21.525575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.420 [2024-11-19 23:42:21.525591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:47.420 [2024-11-19 23:42:21.525604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525661] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:47.420 [2024-11-19 23:42:21.525668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:47.420 [2024-11-19 23:42:21.525676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:47.420 [2024-11-19 23:42:21.525701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:47.420 [2024-11-19 23:42:21.525718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:47.420 [2024-11-19 23:42:21.525739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:47.420 [2024-11-19 23:42:21.525751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:47.420 [2024-11-19 23:42:21.525767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:47.420 [2024-11-19 23:42:21.525781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:47.420 [2024-11-19 23:42:21.525796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:47.420 [2024-11-19 23:42:21.525806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:47.420 [2024-11-19 23:42:21.525827] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:47.420 [2024-11-19 23:42:21.525837] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:47.420 [2024-11-19 23:42:21.525842] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:47.420 [2024-11-19 23:42:21.525848] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:47.420 [2024-11-19 23:42:21.525853] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:47.420 [2024-11-19 23:42:21.525862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:47.420 [2024-11-19 23:42:21.525873] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:47.420 [2024-11-19 23:42:21.525880] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:47.420 [2024-11-19 23:42:21.525886] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.420 [2024-11-19 23:42:21.525894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:47.420 [2024-11-19 23:42:21.525904] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:47.420 [2024-11-19 23:42:21.525911] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:47.420 [2024-11-19 23:42:21.525917] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.420 [2024-11-19 23:42:21.525925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:47.420 [2024-11-19 23:42:21.525936] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:47.420 [2024-11-19 23:42:21.525943] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:47.420 [2024-11-19 23:42:21.525949] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:47.420 [2024-11-19 23:42:21.525957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:47.420 [2024-11-19 23:42:21.525968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:47.420 [2024-11-19 23:42:21.525989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:17:47.420 [2024-11-19 23:42:21.526006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:47.420 [2024-11-19 23:42:21.526018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:47.420 ===================================================== 00:17:47.420 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:47.420 ===================================================== 00:17:47.420 Controller Capabilities/Features 00:17:47.420 ================================ 00:17:47.420 Vendor ID: 4e58 00:17:47.420 Subsystem Vendor ID: 4e58 00:17:47.420 Serial Number: SPDK1 00:17:47.420 Model Number: SPDK bdev Controller 00:17:47.420 Firmware Version: 25.01 00:17:47.420 Recommended Arb Burst: 6 00:17:47.420 IEEE OUI Identifier: 8d 6b 50 00:17:47.420 Multi-path I/O 00:17:47.420 May have multiple subsystem ports: Yes 00:17:47.420 May have multiple controllers: Yes 00:17:47.420 Associated with SR-IOV VF: No 00:17:47.420 Max Data Transfer Size: 131072 00:17:47.421 Max Number of Namespaces: 32 00:17:47.421 Max Number of I/O Queues: 127 00:17:47.421 NVMe Specification Version (VS): 1.3 00:17:47.421 NVMe Specification Version (Identify): 1.3 00:17:47.421 Maximum Queue Entries: 256 00:17:47.421 Contiguous Queues Required: Yes 00:17:47.421 Arbitration Mechanisms Supported 00:17:47.421 Weighted Round Robin: Not Supported 00:17:47.421 Vendor Specific: Not Supported 00:17:47.421 Reset Timeout: 15000 ms 00:17:47.421 Doorbell Stride: 4 bytes 00:17:47.421 NVM Subsystem Reset: Not Supported 00:17:47.421 Command Sets Supported 00:17:47.421 NVM Command Set: Supported 00:17:47.421 Boot Partition: Not Supported 00:17:47.421 Memory Page Size Minimum: 4096 bytes 00:17:47.421 Memory Page Size Maximum: 4096 bytes 00:17:47.421 Persistent Memory Region: Not Supported 00:17:47.421 Optional Asynchronous Events Supported 00:17:47.421 Namespace Attribute Notices: Supported 00:17:47.421 Firmware Activation Notices: Not Supported 00:17:47.421 ANA Change Notices: Not Supported 00:17:47.421 PLE Aggregate Log Change Notices: Not Supported 00:17:47.421 LBA Status Info Alert Notices: Not Supported 00:17:47.421 EGE Aggregate Log Change Notices: Not Supported 00:17:47.421 Normal NVM Subsystem Shutdown event: Not Supported 00:17:47.421 Zone Descriptor Change Notices: Not Supported 00:17:47.421 Discovery Log Change Notices: Not Supported 00:17:47.421 Controller Attributes 00:17:47.421 128-bit Host Identifier: Supported 00:17:47.421 Non-Operational Permissive Mode: Not Supported 00:17:47.421 NVM Sets: Not Supported 00:17:47.421 Read Recovery Levels: Not Supported 00:17:47.421 Endurance Groups: Not Supported 00:17:47.421 Predictable Latency Mode: Not Supported 00:17:47.421 Traffic Based Keep ALive: Not Supported 00:17:47.421 Namespace Granularity: Not Supported 00:17:47.421 SQ Associations: Not Supported 00:17:47.421 UUID List: Not Supported 00:17:47.421 Multi-Domain Subsystem: Not Supported 00:17:47.421 Fixed Capacity Management: Not Supported 00:17:47.421 Variable Capacity Management: Not Supported 00:17:47.421 Delete Endurance Group: Not Supported 00:17:47.421 Delete NVM Set: Not Supported 00:17:47.421 Extended LBA Formats Supported: Not Supported 00:17:47.421 Flexible Data Placement Supported: Not Supported 00:17:47.421 00:17:47.421 Controller Memory Buffer Support 00:17:47.421 ================================ 00:17:47.421 
Supported: No 00:17:47.421 00:17:47.421 Persistent Memory Region Support 00:17:47.421 ================================ 00:17:47.421 Supported: No 00:17:47.421 00:17:47.421 Admin Command Set Attributes 00:17:47.421 ============================ 00:17:47.421 Security Send/Receive: Not Supported 00:17:47.421 Format NVM: Not Supported 00:17:47.421 Firmware Activate/Download: Not Supported 00:17:47.421 Namespace Management: Not Supported 00:17:47.421 Device Self-Test: Not Supported 00:17:47.421 Directives: Not Supported 00:17:47.421 NVMe-MI: Not Supported 00:17:47.421 Virtualization Management: Not Supported 00:17:47.421 Doorbell Buffer Config: Not Supported 00:17:47.421 Get LBA Status Capability: Not Supported 00:17:47.421 Command & Feature Lockdown Capability: Not Supported 00:17:47.421 Abort Command Limit: 4 00:17:47.421 Async Event Request Limit: 4 00:17:47.421 Number of Firmware Slots: N/A 00:17:47.421 Firmware Slot 1 Read-Only: N/A 00:17:47.421 Firmware Activation Without Reset: N/A 00:17:47.421 Multiple Update Detection Support: N/A 00:17:47.421 Firmware Update Granularity: No Information Provided 00:17:47.421 Per-Namespace SMART Log: No 00:17:47.421 Asymmetric Namespace Access Log Page: Not Supported 00:17:47.421 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:47.421 Command Effects Log Page: Supported 00:17:47.421 Get Log Page Extended Data: Supported 00:17:47.421 Telemetry Log Pages: Not Supported 00:17:47.421 Persistent Event Log Pages: Not Supported 00:17:47.421 Supported Log Pages Log Page: May Support 00:17:47.421 Commands Supported & Effects Log Page: Not Supported 00:17:47.421 Feature Identifiers & Effects Log Page:May Support 00:17:47.421 NVMe-MI Commands & Effects Log Page: May Support 00:17:47.421 Data Area 4 for Telemetry Log: Not Supported 00:17:47.421 Error Log Page Entries Supported: 128 00:17:47.421 Keep Alive: Supported 00:17:47.421 Keep Alive Granularity: 10000 ms 00:17:47.421 00:17:47.421 NVM Command Set Attributes 00:17:47.421 ========================== 00:17:47.421 Submission Queue Entry Size 00:17:47.421 Max: 64 00:17:47.421 Min: 64 00:17:47.421 Completion Queue Entry Size 00:17:47.421 Max: 16 00:17:47.421 Min: 16 00:17:47.421 Number of Namespaces: 32 00:17:47.421 Compare Command: Supported 00:17:47.421 Write Uncorrectable Command: Not Supported 00:17:47.421 Dataset Management Command: Supported 00:17:47.421 Write Zeroes Command: Supported 00:17:47.421 Set Features Save Field: Not Supported 00:17:47.421 Reservations: Not Supported 00:17:47.421 Timestamp: Not Supported 00:17:47.421 Copy: Supported 00:17:47.421 Volatile Write Cache: Present 00:17:47.421 Atomic Write Unit (Normal): 1 00:17:47.421 Atomic Write Unit (PFail): 1 00:17:47.421 Atomic Compare & Write Unit: 1 00:17:47.421 Fused Compare & Write: Supported 00:17:47.421 Scatter-Gather List 00:17:47.421 SGL Command Set: Supported (Dword aligned) 00:17:47.421 SGL Keyed: Not Supported 00:17:47.421 SGL Bit Bucket Descriptor: Not Supported 00:17:47.421 SGL Metadata Pointer: Not Supported 00:17:47.421 Oversized SGL: Not Supported 00:17:47.421 SGL Metadata Address: Not Supported 00:17:47.421 SGL Offset: Not Supported 00:17:47.421 Transport SGL Data Block: Not Supported 00:17:47.421 Replay Protected Memory Block: Not Supported 00:17:47.421 00:17:47.421 Firmware Slot Information 00:17:47.421 ========================= 00:17:47.421 Active slot: 1 00:17:47.421 Slot 1 Firmware Revision: 25.01 00:17:47.421 00:17:47.421 00:17:47.421 Commands Supported and Effects 00:17:47.421 ============================== 00:17:47.421 Admin 
Commands 00:17:47.421 -------------- 00:17:47.421 Get Log Page (02h): Supported 00:17:47.421 Identify (06h): Supported 00:17:47.421 Abort (08h): Supported 00:17:47.421 Set Features (09h): Supported 00:17:47.421 Get Features (0Ah): Supported 00:17:47.421 Asynchronous Event Request (0Ch): Supported 00:17:47.421 Keep Alive (18h): Supported 00:17:47.421 I/O Commands 00:17:47.421 ------------ 00:17:47.421 Flush (00h): Supported LBA-Change 00:17:47.421 Write (01h): Supported LBA-Change 00:17:47.421 Read (02h): Supported 00:17:47.421 Compare (05h): Supported 00:17:47.421 Write Zeroes (08h): Supported LBA-Change 00:17:47.421 Dataset Management (09h): Supported LBA-Change 00:17:47.421 Copy (19h): Supported LBA-Change 00:17:47.421 00:17:47.421 Error Log 00:17:47.421 ========= 00:17:47.421 00:17:47.421 Arbitration 00:17:47.421 =========== 00:17:47.421 Arbitration Burst: 1 00:17:47.421 00:17:47.421 Power Management 00:17:47.421 ================ 00:17:47.421 Number of Power States: 1 00:17:47.421 Current Power State: Power State #0 00:17:47.422 Power State #0: 00:17:47.422 Max Power: 0.00 W 00:17:47.422 Non-Operational State: Operational 00:17:47.422 Entry Latency: Not Reported 00:17:47.422 Exit Latency: Not Reported 00:17:47.422 Relative Read Throughput: 0 00:17:47.422 Relative Read Latency: 0 00:17:47.422 Relative Write Throughput: 0 00:17:47.422 Relative Write Latency: 0 00:17:47.422 Idle Power: Not Reported 00:17:47.422 Active Power: Not Reported 00:17:47.422 Non-Operational Permissive Mode: Not Supported 00:17:47.422 00:17:47.422 Health Information 00:17:47.422 ================== 00:17:47.422 Critical Warnings: 00:17:47.422 Available Spare Space: OK 00:17:47.422 Temperature: OK 00:17:47.422 Device Reliability: OK 00:17:47.422 Read Only: No 00:17:47.422 Volatile Memory Backup: OK 00:17:47.422 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:47.422 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:47.422 Available Spare: 0% 00:17:47.422 Available Sp[2024-11-19 23:42:21.526166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:47.422 [2024-11-19 23:42:21.526187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:47.422 [2024-11-19 23:42:21.526230] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:47.422 [2024-11-19 23:42:21.526247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.422 [2024-11-19 23:42:21.526258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.422 [2024-11-19 23:42:21.526268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.422 [2024-11-19 23:42:21.526277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.422 [2024-11-19 23:42:21.529081] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:47.422 [2024-11-19 23:42:21.529102] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:47.422 [2024-11-19 23:42:21.529761] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:47.422 [2024-11-19 23:42:21.529850] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:47.422 [2024-11-19 23:42:21.529864] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:47.422 [2024-11-19 23:42:21.530770] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:47.422 [2024-11-19 23:42:21.530792] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:47.422 [2024-11-19 23:42:21.530844] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:47.422 [2024-11-19 23:42:21.534081] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:47.422 are Threshold: 0% 00:17:47.422 Life Percentage Used: 0% 00:17:47.422 Data Units Read: 0 00:17:47.422 Data Units Written: 0 00:17:47.422 Host Read Commands: 0 00:17:47.422 Host Write Commands: 0 00:17:47.422 Controller Busy Time: 0 minutes 00:17:47.422 Power Cycles: 0 00:17:47.422 Power On Hours: 0 hours 00:17:47.422 Unsafe Shutdowns: 0 00:17:47.422 Unrecoverable Media Errors: 0 00:17:47.422 Lifetime Error Log Entries: 0 00:17:47.422 Warning Temperature Time: 0 minutes 00:17:47.422 Critical Temperature Time: 0 minutes 00:17:47.422 00:17:47.422 Number of Queues 00:17:47.422 ================ 00:17:47.422 Number of I/O Submission Queues: 127 00:17:47.422 Number of I/O Completion Queues: 127 00:17:47.422 00:17:47.422 Active Namespaces 00:17:47.422 ================= 00:17:47.422 Namespace ID:1 00:17:47.422 Error Recovery Timeout: Unlimited 00:17:47.422 Command Set Identifier: NVM (00h) 00:17:47.422 Deallocate: Supported 00:17:47.422 Deallocated/Unwritten Error: Not Supported 00:17:47.422 Deallocated Read Value: Unknown 00:17:47.422 Deallocate in Write Zeroes: Not Supported 00:17:47.422 Deallocated Guard Field: 0xFFFF 00:17:47.422 Flush: Supported 00:17:47.422 Reservation: Supported 00:17:47.422 Namespace Sharing Capabilities: Multiple Controllers 00:17:47.422 Size (in LBAs): 131072 (0GiB) 00:17:47.422 Capacity (in LBAs): 131072 (0GiB) 00:17:47.422 Utilization (in LBAs): 131072 (0GiB) 00:17:47.422 NGUID: 94E678E91A38420DB707B4AC0DDDBEA3 00:17:47.422 UUID: 94e678e9-1a38-420d-b707-b4ac0dddbea3 00:17:47.422 Thin Provisioning: Not Supported 00:17:47.422 Per-NS Atomic Units: Yes 00:17:47.422 Atomic Boundary Size (Normal): 0 00:17:47.422 Atomic Boundary Size (PFail): 0 00:17:47.422 Atomic Boundary Offset: 0 00:17:47.422 Maximum Single Source Range Length: 65535 00:17:47.422 Maximum Copy Length: 65535 00:17:47.422 Maximum Source Range Count: 1 00:17:47.422 NGUID/EUI64 Never Reused: No 00:17:47.422 Namespace Write Protected: No 00:17:47.422 Number of LBA Formats: 1 00:17:47.422 Current LBA Format: LBA Format #00 00:17:47.422 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:47.422 00:17:47.422 23:42:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
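The perf, reconnect and arbitration runs that follow all target cnode1 through its vfio-user socket directory. The second subsystem created above would be reached the same way by pointing the transport string at its own directory and NQN; a hypothetical invocation against cnode2, with the same flags as the read pass below but not executed in this log, would be:

# Hypothetical (not part of this run): the same 5-second 4 KiB read pass against cnode2.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2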
00:17:47.681 [2024-11-19 23:42:21.773937] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:52.945 Initializing NVMe Controllers 00:17:52.945 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:52.945 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:52.945 Initialization complete. Launching workers. 00:17:52.945 ======================================================== 00:17:52.945 Latency(us) 00:17:52.945 Device Information : IOPS MiB/s Average min max 00:17:52.945 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33522.80 130.95 3819.08 1180.96 8990.92 00:17:52.945 ======================================================== 00:17:52.945 Total : 33522.80 130.95 3819.08 1180.96 8990.92 00:17:52.945 00:17:52.945 [2024-11-19 23:42:26.796429] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:52.945 23:42:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:52.945 [2024-11-19 23:42:27.053662] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:58.207 Initializing NVMe Controllers 00:17:58.207 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:58.207 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:58.207 Initialization complete. Launching workers. 
00:17:58.207 ======================================================== 00:17:58.207 Latency(us) 00:17:58.207 Device Information : IOPS MiB/s Average min max 00:17:58.207 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15870.99 62.00 8075.98 4929.90 15992.43 00:17:58.207 ======================================================== 00:17:58.207 Total : 15870.99 62.00 8075.98 4929.90 15992.43 00:17:58.207 00:17:58.207 [2024-11-19 23:42:32.090656] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:58.207 23:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:58.207 [2024-11-19 23:42:32.322754] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:03.472 [2024-11-19 23:42:37.403452] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:03.472 Initializing NVMe Controllers 00:18:03.472 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:03.472 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:03.472 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:03.472 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:03.472 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:03.472 Initialization complete. Launching workers. 00:18:03.472 Starting thread on core 2 00:18:03.472 Starting thread on core 3 00:18:03.472 Starting thread on core 1 00:18:03.472 23:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:03.472 [2024-11-19 23:42:37.730349] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:06.758 [2024-11-19 23:42:40.786095] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:06.758 Initializing NVMe Controllers 00:18:06.758 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:06.758 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:06.758 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:06.758 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:06.758 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:06.758 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:06.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:06.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:06.758 Initialization complete. Launching workers. 
00:18:06.758 Starting thread on core 1 with urgent priority queue 00:18:06.758 Starting thread on core 2 with urgent priority queue 00:18:06.758 Starting thread on core 3 with urgent priority queue 00:18:06.758 Starting thread on core 0 with urgent priority queue 00:18:06.758 SPDK bdev Controller (SPDK1 ) core 0: 5941.33 IO/s 16.83 secs/100000 ios 00:18:06.758 SPDK bdev Controller (SPDK1 ) core 1: 5760.33 IO/s 17.36 secs/100000 ios 00:18:06.758 SPDK bdev Controller (SPDK1 ) core 2: 6100.00 IO/s 16.39 secs/100000 ios 00:18:06.758 SPDK bdev Controller (SPDK1 ) core 3: 5698.00 IO/s 17.55 secs/100000 ios 00:18:06.758 ======================================================== 00:18:06.758 00:18:06.758 23:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:07.016 [2024-11-19 23:42:41.103466] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:07.016 Initializing NVMe Controllers 00:18:07.016 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:07.016 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:07.016 Namespace ID: 1 size: 0GB 00:18:07.016 Initialization complete. 00:18:07.016 INFO: using host memory buffer for IO 00:18:07.016 Hello world! 00:18:07.016 [2024-11-19 23:42:41.138067] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:07.016 23:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:07.274 [2024-11-19 23:42:41.458545] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.209 Initializing NVMe Controllers 00:18:08.209 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.209 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:08.209 Initialization complete. Launching workers. 
00:18:08.209 submit (in ns) avg, min, max = 7289.3, 3526.7, 4014558.9 00:18:08.209 complete (in ns) avg, min, max = 25686.0, 2083.3, 4019414.4 00:18:08.209 00:18:08.209 Submit histogram 00:18:08.209 ================ 00:18:08.209 Range in us Cumulative Count 00:18:08.209 3.508 - 3.532: 0.0228% ( 3) 00:18:08.209 3.532 - 3.556: 0.9585% ( 123) 00:18:08.209 3.556 - 3.579: 3.1645% ( 290) 00:18:08.209 3.579 - 3.603: 8.8620% ( 749) 00:18:08.209 3.603 - 3.627: 16.4841% ( 1002) 00:18:08.209 3.627 - 3.650: 28.2748% ( 1550) 00:18:08.209 3.650 - 3.674: 37.5171% ( 1215) 00:18:08.209 3.674 - 3.698: 45.6793% ( 1073) 00:18:08.209 3.698 - 3.721: 52.0995% ( 844) 00:18:08.209 3.721 - 3.745: 57.9416% ( 768) 00:18:08.209 3.745 - 3.769: 62.7263% ( 629) 00:18:08.209 3.769 - 3.793: 66.9938% ( 561) 00:18:08.209 3.793 - 3.816: 70.5462% ( 467) 00:18:08.209 3.816 - 3.840: 73.8552% ( 435) 00:18:08.209 3.840 - 3.864: 77.3391% ( 458) 00:18:08.209 3.864 - 3.887: 80.6177% ( 431) 00:18:08.209 3.887 - 3.911: 83.9343% ( 436) 00:18:08.209 3.911 - 3.935: 86.5130% ( 339) 00:18:08.209 3.935 - 3.959: 88.3006% ( 235) 00:18:08.209 3.959 - 3.982: 89.9589% ( 218) 00:18:08.209 3.982 - 4.006: 91.6324% ( 220) 00:18:08.209 4.006 - 4.030: 92.9636% ( 175) 00:18:08.209 4.030 - 4.053: 94.1807% ( 160) 00:18:08.209 4.053 - 4.077: 94.9719% ( 104) 00:18:08.209 4.077 - 4.101: 95.5348% ( 74) 00:18:08.209 4.101 - 4.124: 95.9455% ( 54) 00:18:08.209 4.124 - 4.148: 96.2194% ( 36) 00:18:08.209 4.148 - 4.172: 96.4324% ( 28) 00:18:08.209 4.172 - 4.196: 96.5921% ( 21) 00:18:08.209 4.196 - 4.219: 96.6530% ( 8) 00:18:08.209 4.219 - 4.243: 96.7138% ( 8) 00:18:08.209 4.243 - 4.267: 96.8051% ( 12) 00:18:08.209 4.267 - 4.290: 96.8584% ( 7) 00:18:08.209 4.290 - 4.314: 96.9496% ( 12) 00:18:08.209 4.314 - 4.338: 97.0257% ( 10) 00:18:08.209 4.338 - 4.361: 97.0942% ( 9) 00:18:08.209 4.361 - 4.385: 97.1398% ( 6) 00:18:08.209 4.385 - 4.409: 97.1778% ( 5) 00:18:08.209 4.409 - 4.433: 97.2083% ( 4) 00:18:08.209 4.433 - 4.456: 97.2235% ( 2) 00:18:08.209 4.480 - 4.504: 97.2311% ( 1) 00:18:08.209 4.504 - 4.527: 97.2463% ( 2) 00:18:08.209 4.527 - 4.551: 97.2539% ( 1) 00:18:08.209 4.551 - 4.575: 97.2843% ( 4) 00:18:08.209 4.575 - 4.599: 97.3148% ( 4) 00:18:08.209 4.599 - 4.622: 97.3528% ( 5) 00:18:08.209 4.622 - 4.646: 97.3680% ( 2) 00:18:08.209 4.646 - 4.670: 97.4137% ( 6) 00:18:08.209 4.670 - 4.693: 97.4745% ( 8) 00:18:08.209 4.693 - 4.717: 97.5049% ( 4) 00:18:08.209 4.717 - 4.741: 97.5734% ( 9) 00:18:08.209 4.741 - 4.764: 97.6267% ( 7) 00:18:08.209 4.764 - 4.788: 97.6875% ( 8) 00:18:08.209 4.788 - 4.812: 97.7179% ( 4) 00:18:08.209 4.812 - 4.836: 97.7712% ( 7) 00:18:08.209 4.836 - 4.859: 97.8396% ( 9) 00:18:08.209 4.859 - 4.883: 97.9157% ( 10) 00:18:08.209 4.883 - 4.907: 97.9461% ( 4) 00:18:08.209 4.907 - 4.930: 97.9690% ( 3) 00:18:08.209 4.930 - 4.954: 98.0374% ( 9) 00:18:08.209 4.954 - 4.978: 98.0602% ( 3) 00:18:08.209 4.978 - 5.001: 98.0755% ( 2) 00:18:08.209 5.001 - 5.025: 98.1135% ( 5) 00:18:08.209 5.073 - 5.096: 98.1515% ( 5) 00:18:08.209 5.096 - 5.120: 98.1896% ( 5) 00:18:08.209 5.120 - 5.144: 98.2048% ( 2) 00:18:08.209 5.144 - 5.167: 98.2200% ( 2) 00:18:08.209 5.167 - 5.191: 98.2504% ( 4) 00:18:08.209 5.191 - 5.215: 98.2656% ( 2) 00:18:08.209 5.215 - 5.239: 98.2732% ( 1) 00:18:08.209 5.239 - 5.262: 98.2961% ( 3) 00:18:08.209 5.428 - 5.452: 98.3037% ( 1) 00:18:08.209 5.452 - 5.476: 98.3113% ( 1) 00:18:08.210 5.476 - 5.499: 98.3189% ( 1) 00:18:08.210 5.499 - 5.523: 98.3341% ( 2) 00:18:08.210 5.547 - 5.570: 98.3417% ( 1) 00:18:08.210 5.570 - 5.594: 98.3493% ( 1) 
00:18:08.210 5.594 - 5.618: 98.3569% ( 1) 00:18:08.210 5.641 - 5.665: 98.3721% ( 2) 00:18:08.210 5.736 - 5.760: 98.3797% ( 1) 00:18:08.210 5.831 - 5.855: 98.3873% ( 1) 00:18:08.210 5.950 - 5.973: 98.3949% ( 1) 00:18:08.210 6.684 - 6.732: 98.4026% ( 1) 00:18:08.210 6.779 - 6.827: 98.4102% ( 1) 00:18:08.210 7.064 - 7.111: 98.4178% ( 1) 00:18:08.210 7.159 - 7.206: 98.4254% ( 1) 00:18:08.210 7.253 - 7.301: 98.4330% ( 1) 00:18:08.210 7.396 - 7.443: 98.4406% ( 1) 00:18:08.210 7.443 - 7.490: 98.4482% ( 1) 00:18:08.210 7.490 - 7.538: 98.4558% ( 1) 00:18:08.210 7.538 - 7.585: 98.4634% ( 1) 00:18:08.210 7.585 - 7.633: 98.4786% ( 2) 00:18:08.210 7.775 - 7.822: 98.4862% ( 1) 00:18:08.210 7.917 - 7.964: 98.4938% ( 1) 00:18:08.210 7.964 - 8.012: 98.5014% ( 1) 00:18:08.210 8.059 - 8.107: 98.5091% ( 1) 00:18:08.210 8.107 - 8.154: 98.5167% ( 1) 00:18:08.210 8.154 - 8.201: 98.5395% ( 3) 00:18:08.210 8.201 - 8.249: 98.5547% ( 2) 00:18:08.210 8.249 - 8.296: 98.5623% ( 1) 00:18:08.210 8.296 - 8.344: 98.5851% ( 3) 00:18:08.210 8.391 - 8.439: 98.6079% ( 3) 00:18:08.210 8.439 - 8.486: 98.6232% ( 2) 00:18:08.210 8.533 - 8.581: 98.6308% ( 1) 00:18:08.210 8.581 - 8.628: 98.6460% ( 2) 00:18:08.210 8.628 - 8.676: 98.6688% ( 3) 00:18:08.210 8.676 - 8.723: 98.6916% ( 3) 00:18:08.210 8.770 - 8.818: 98.7068% ( 2) 00:18:08.210 9.007 - 9.055: 98.7144% ( 1) 00:18:08.210 9.150 - 9.197: 98.7220% ( 1) 00:18:08.210 9.197 - 9.244: 98.7449% ( 3) 00:18:08.210 9.292 - 9.339: 98.7525% ( 1) 00:18:08.210 9.387 - 9.434: 98.7601% ( 1) 00:18:08.210 9.434 - 9.481: 98.7677% ( 1) 00:18:08.210 9.481 - 9.529: 98.7753% ( 1) 00:18:08.210 9.529 - 9.576: 98.7829% ( 1) 00:18:08.210 9.576 - 9.624: 98.7981% ( 2) 00:18:08.210 9.671 - 9.719: 98.8057% ( 1) 00:18:08.210 9.719 - 9.766: 98.8209% ( 2) 00:18:08.210 9.766 - 9.813: 98.8285% ( 1) 00:18:08.210 9.956 - 10.003: 98.8361% ( 1) 00:18:08.210 10.098 - 10.145: 98.8438% ( 1) 00:18:08.210 10.145 - 10.193: 98.8514% ( 1) 00:18:08.210 10.240 - 10.287: 98.8666% ( 2) 00:18:08.210 10.287 - 10.335: 98.8742% ( 1) 00:18:08.210 10.619 - 10.667: 98.8818% ( 1) 00:18:08.210 10.714 - 10.761: 98.8894% ( 1) 00:18:08.210 11.093 - 11.141: 98.8970% ( 1) 00:18:08.210 11.378 - 11.425: 98.9046% ( 1) 00:18:08.210 11.473 - 11.520: 98.9198% ( 2) 00:18:08.210 11.757 - 11.804: 98.9274% ( 1) 00:18:08.210 11.947 - 11.994: 98.9350% ( 1) 00:18:08.210 11.994 - 12.041: 98.9503% ( 2) 00:18:08.210 12.231 - 12.326: 98.9579% ( 1) 00:18:08.210 12.800 - 12.895: 98.9655% ( 1) 00:18:08.210 13.179 - 13.274: 98.9731% ( 1) 00:18:08.210 13.274 - 13.369: 98.9959% ( 3) 00:18:08.210 13.369 - 13.464: 99.0035% ( 1) 00:18:08.210 14.317 - 14.412: 99.0111% ( 1) 00:18:08.210 14.507 - 14.601: 99.0187% ( 1) 00:18:08.210 16.877 - 16.972: 99.0263% ( 1) 00:18:08.210 17.161 - 17.256: 99.0491% ( 3) 00:18:08.210 17.256 - 17.351: 99.0567% ( 1) 00:18:08.210 17.351 - 17.446: 99.0948% ( 5) 00:18:08.210 17.446 - 17.541: 99.1100% ( 2) 00:18:08.210 17.541 - 17.636: 99.1709% ( 8) 00:18:08.210 17.636 - 17.730: 99.2393% ( 9) 00:18:08.210 17.730 - 17.825: 99.2926% ( 7) 00:18:08.210 17.825 - 17.920: 99.3306% ( 5) 00:18:08.210 17.920 - 18.015: 99.3914% ( 8) 00:18:08.210 18.015 - 18.110: 99.4523% ( 8) 00:18:08.210 18.110 - 18.204: 99.4979% ( 6) 00:18:08.210 18.204 - 18.299: 99.5664% ( 9) 00:18:08.210 18.299 - 18.394: 99.6349% ( 9) 00:18:08.210 18.394 - 18.489: 99.6957% ( 8) 00:18:08.210 18.489 - 18.584: 99.7338% ( 5) 00:18:08.210 18.584 - 18.679: 99.7794% ( 6) 00:18:08.210 18.679 - 18.773: 99.8022% ( 3) 00:18:08.210 18.773 - 18.868: 99.8174% ( 2) 00:18:08.210 18.868 - 18.963: 
99.8479% ( 4) 00:18:08.210 19.058 - 19.153: 99.8555% ( 1) 00:18:08.210 19.437 - 19.532: 99.8631% ( 1) 00:18:08.210 20.196 - 20.290: 99.8707% ( 1) 00:18:08.210 23.988 - 24.083: 99.8783% ( 1) 00:18:08.210 24.273 - 24.462: 99.8859% ( 1) 00:18:08.210 27.686 - 27.876: 99.8935% ( 1) 00:18:08.210 30.151 - 30.341: 99.9011% ( 1) 00:18:08.210 48.924 - 49.304: 99.9087% ( 1) 00:18:08.210 165.357 - 166.116: 99.9163% ( 1) 00:18:08.210 3980.705 - 4004.978: 99.9848% ( 9) 00:18:08.210 4004.978 - 4029.250: 100.0000% ( 2) 00:18:08.210 00:18:08.210 Complete histogram 00:18:08.210 ================== 00:18:08.210 Range in us Cumulative Count 00:18:08.210 2.074 - 2.086: 0.0076% ( 1) 00:18:08.210 2.086 - 2.098: 1.5214% ( 199) 00:18:08.210 2.098 - 2.110: 12.8252% ( 1486) 00:18:08.210 2.110 - 2.121: 46.6758% ( 4450) 00:18:08.210 2.121 - 2.133: 52.3277% ( 743) 00:18:08.210 2.133 - 2.145: 57.4015% ( 667) 00:18:08.210 2.145 - 2.157: 61.6613% ( 560) 00:18:08.210 2.157 - 2.169: 63.7989% ( 281) 00:18:08.210 2.169 - 2.181: 72.2273% ( 1108) 00:18:08.210 2.181 - 2.193: 82.7096% ( 1378) 00:18:08.210 2.193 - 2.204: 84.4744% ( 232) 00:18:08.210 2.204 - 2.216: 86.8477% ( 312) 00:18:08.210 2.216 - 2.228: 88.4604% ( 212) 00:18:08.210 2.228 - 2.240: 89.9589% ( 197) 00:18:08.210 2.240 - 2.252: 92.3475% ( 314) 00:18:08.210 2.252 - 2.264: 93.8993% ( 204) 00:18:08.210 2.264 - 2.276: 94.1883% ( 38) 00:18:08.210 2.276 - 2.287: 94.5078% ( 42) 00:18:08.210 2.287 - 2.299: 94.8578% ( 46) 00:18:08.210 2.299 - 2.311: 95.1772% ( 42) 00:18:08.210 2.311 - 2.323: 95.6717% ( 65) 00:18:08.210 2.323 - 2.335: 95.8314% ( 21) 00:18:08.210 2.335 - 2.347: 95.9075% ( 10) 00:18:08.210 2.347 - 2.359: 95.9607% ( 7) 00:18:08.210 2.359 - 2.370: 96.1357% ( 23) 00:18:08.210 2.370 - 2.382: 96.4019% ( 35) 00:18:08.210 2.382 - 2.394: 96.8051% ( 53) 00:18:08.210 2.394 - 2.406: 97.1398% ( 44) 00:18:08.210 2.406 - 2.418: 97.3452% ( 27) 00:18:08.210 2.418 - 2.430: 97.4897% ( 19) 00:18:08.210 2.430 - 2.441: 97.6723% ( 24) 00:18:08.210 2.441 - 2.453: 97.7712% ( 13) 00:18:08.210 2.453 - 2.465: 97.9157% ( 19) 00:18:08.210 2.465 - 2.477: 97.9690% ( 7) 00:18:08.210 2.477 - 2.489: 98.0070% ( 5) 00:18:08.210 2.489 - 2.501: 98.0450% ( 5) 00:18:08.210 2.501 - 2.513: 98.0983% ( 7) 00:18:08.210 2.513 - 2.524: 98.1287% ( 4) 00:18:08.210 2.524 - 2.536: 98.1363% ( 1) 00:18:08.210 2.536 - 2.548: 98.1591% ( 3) 00:18:08.210 2.548 - 2.560: 98.1820% ( 3) 00:18:08.210 2.560 - 2.572: 98.2048% ( 3) 00:18:08.210 2.572 - 2.584: 98.2124% ( 1) 00:18:08.210 2.584 - 2.596: 98.2276% ( 2) 00:18:08.210 2.596 - 2.607: 98.2352% ( 1) 00:18:08.210 2.607 - 2.619: 98.2504% ( 2) 00:18:08.210 2.643 - 2.655: 98.2580% ( 1) 00:18:08.210 2.655 - 2.667: 98.2732% ( 2) 00:18:08.210 2.667 - 2.679: 98.2885% ( 2) 00:18:08.211 2.679 - 2.690: 98.2961% ( 1) 00:18:08.211 2.690 - 2.702: 98.3037% ( 1) 00:18:08.211 2.797 - 2.809: 98.3189% ( 2) 00:18:08.211 2.844 - 2.856: 98.3265% ( 1) 00:18:08.211 2.880 - 2.892: 98.3341% ( 1) 00:18:08.211 2.892 - 2.904: 98.3417% ( 1) 00:18:08.211 2.916 - 2.927: 98.3569% ( 2) 00:18:08.211 2.963 - 2.975: 98.3721% ( 2) 00:18:08.211 2.975 - 2.987: 98.3797% ( 1) 00:18:08.211 3.010 - 3.022: 98.3873% ( 1) 00:18:08.211 3.022 - 3.034: 98.3949% ( 1) 00:18:08.211 3.034 - 3.058: 98.4178% ( 3) 00:18:08.211 3.058 - 3.081: 98.4254% ( 1) 00:18:08.211 3.105 - 3.129: 98.4330% ( 1) 00:18:08.211 3.153 - 3.176: 98.4558% ( 3) 00:18:08.211 3.176 - 3.200: 98.4634% ( 1) 00:18:08.211 3.200 - 3.224: 98.4786% ( 2) 00:18:08.211 3.224 - 3.247: 98.4938% ( 2) 00:18:08.211 3.247 - 3.271: 98.5014% ( 1) 00:18:08.211 3.271 - 
3.295: 98.5091% ( 1) 00:18:08.211 3.319 - 3.342: 98.5167% ( 1) 00:18:08.211 3.390 - 3.413: 98.5319% ( 2) 00:18:08.211 3.413 - 3.437: 98.5395% ( 1) 00:18:08.211 3.437 - 3.461: 98.5547% ( 2) 00:18:08.211 3.461 - 3.484: 98.5775% ( 3) 00:18:08.211 3.484 - 3.508: 98.5927% ( 2) 00:18:08.211 3.532 - 3.556: 98.6003% ( 1) 00:18:08.211 3.556 - 3.579: 98.6155% ( 2) 00:18:08.211 3.603 - 3.627: 98.6308% ( 2) 00:18:08.211 3.627 - 3.650: 98.6384% ( 1) 00:18:08.211 3.650 - 3.674: 98.6612% ( 3) 00:18:08.211 [2024-11-19 23:42:42.479507] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:08.469 3.698 - 3.721: 98.6688% ( 1) 00:18:08.469 3.745 - 3.769: 98.6764% ( 1) 00:18:08.469 3.793 - 3.816: 98.6840% ( 1) 00:18:08.469 3.816 - 3.840: 98.6992% ( 2) 00:18:08.469 3.864 - 3.887: 98.7068% ( 1) 00:18:08.469 3.887 - 3.911: 98.7220% ( 2) 00:18:08.469 3.935 - 3.959: 98.7297% ( 1) 00:18:08.469 3.959 - 3.982: 98.7449% ( 2) 00:18:08.469 4.006 - 4.030: 98.7525% ( 1) 00:18:08.469 4.053 - 4.077: 98.7753% ( 3) 00:18:08.469 4.361 - 4.385: 98.7829% ( 1) 00:18:08.469 4.456 - 4.480: 98.7905% ( 1) 00:18:08.469 4.527 - 4.551: 98.7981% ( 1) 00:18:08.469 5.523 - 5.547: 98.8057% ( 1) 00:18:08.469 5.547 - 5.570: 98.8133% ( 1) 00:18:08.469 5.855 - 5.879: 98.8209% ( 1) 00:18:08.469 5.950 - 5.973: 98.8285% ( 1) 00:18:08.469 6.163 - 6.210: 98.8361% ( 1) 00:18:08.469 6.305 - 6.353: 98.8438% ( 1) 00:18:08.469 6.447 - 6.495: 98.8666% ( 3) 00:18:08.469 6.827 - 6.874: 98.8818% ( 2) 00:18:08.469 7.348 - 7.396: 98.8894% ( 1) 00:18:08.469 7.727 - 7.775: 98.8970% ( 1) 00:18:08.469 7.964 - 8.012: 98.9046% ( 1) 00:18:08.469 8.201 - 8.249: 98.9122% ( 1) 00:18:08.470 8.723 - 8.770: 98.9198% ( 1) 00:18:08.470 9.387 - 9.434: 98.9274% ( 1) 00:18:08.470 12.516 - 12.610: 98.9350% ( 1) 00:18:08.470 15.739 - 15.834: 98.9579% ( 3) 00:18:08.470 15.834 - 15.929: 98.9655% ( 1) 00:18:08.470 15.929 - 16.024: 98.9883% ( 3) 00:18:08.470 16.024 - 16.119: 99.0035% ( 2) 00:18:08.470 16.119 - 16.213: 99.0339% ( 4) 00:18:08.470 16.213 - 16.308: 99.0872% ( 7) 00:18:08.470 16.308 - 16.403: 99.1176% ( 4) 00:18:08.470 16.403 - 16.498: 99.1252% ( 1) 00:18:08.470 16.498 - 16.593: 99.1328% ( 1) 00:18:08.470 16.593 - 16.687: 99.1632% ( 4) 00:18:08.470 16.687 - 16.782: 99.1937% ( 4) 00:18:08.470 16.782 - 16.877: 99.2545% ( 8) 00:18:08.470 16.877 - 16.972: 99.2926% ( 5) 00:18:08.470 16.972 - 17.067: 99.3154% ( 3) 00:18:08.470 17.067 - 17.161: 99.3230% ( 1) 00:18:08.470 17.161 - 17.256: 99.3306% ( 1) 00:18:08.470 17.351 - 17.446: 99.3458% ( 2) 00:18:08.470 17.636 - 17.730: 99.3534% ( 1) 00:18:08.470 17.825 - 17.920: 99.3610% ( 1) 00:18:08.470 18.015 - 18.110: 99.3686% ( 1) 00:18:08.470 18.110 - 18.204: 99.3838% ( 2) 00:18:08.470 18.204 - 18.299: 99.3914% ( 1) 00:18:08.470 18.394 - 18.489: 99.3991% ( 1) 00:18:08.470 42.667 - 42.856: 99.4067% ( 1) 00:18:08.470 193.422 - 194.181: 99.4143% ( 1) 00:18:08.470 3980.705 - 4004.978: 99.9087% ( 65) 00:18:08.470 4004.978 - 4029.250: 100.0000% ( 12) 00:18:08.470 00:18:08.470 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:08.470 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:08.470 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:08.470 23:42:42
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:08.470 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:08.728 [ 00:18:08.728 { 00:18:08.728 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:08.728 "subtype": "Discovery", 00:18:08.728 "listen_addresses": [], 00:18:08.728 "allow_any_host": true, 00:18:08.728 "hosts": [] 00:18:08.728 }, 00:18:08.728 { 00:18:08.728 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:08.728 "subtype": "NVMe", 00:18:08.728 "listen_addresses": [ 00:18:08.728 { 00:18:08.728 "trtype": "VFIOUSER", 00:18:08.728 "adrfam": "IPv4", 00:18:08.728 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:08.728 "trsvcid": "0" 00:18:08.728 } 00:18:08.728 ], 00:18:08.728 "allow_any_host": true, 00:18:08.728 "hosts": [], 00:18:08.728 "serial_number": "SPDK1", 00:18:08.728 "model_number": "SPDK bdev Controller", 00:18:08.728 "max_namespaces": 32, 00:18:08.728 "min_cntlid": 1, 00:18:08.728 "max_cntlid": 65519, 00:18:08.728 "namespaces": [ 00:18:08.728 { 00:18:08.728 "nsid": 1, 00:18:08.728 "bdev_name": "Malloc1", 00:18:08.728 "name": "Malloc1", 00:18:08.728 "nguid": "94E678E91A38420DB707B4AC0DDDBEA3", 00:18:08.728 "uuid": "94e678e9-1a38-420d-b707-b4ac0dddbea3" 00:18:08.728 } 00:18:08.728 ] 00:18:08.728 }, 00:18:08.728 { 00:18:08.728 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:08.728 "subtype": "NVMe", 00:18:08.728 "listen_addresses": [ 00:18:08.728 { 00:18:08.728 "trtype": "VFIOUSER", 00:18:08.728 "adrfam": "IPv4", 00:18:08.728 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:08.728 "trsvcid": "0" 00:18:08.728 } 00:18:08.728 ], 00:18:08.728 "allow_any_host": true, 00:18:08.728 "hosts": [], 00:18:08.728 "serial_number": "SPDK2", 00:18:08.728 "model_number": "SPDK bdev Controller", 00:18:08.728 "max_namespaces": 32, 00:18:08.728 "min_cntlid": 1, 00:18:08.728 "max_cntlid": 65519, 00:18:08.728 "namespaces": [ 00:18:08.728 { 00:18:08.728 "nsid": 1, 00:18:08.728 "bdev_name": "Malloc2", 00:18:08.728 "name": "Malloc2", 00:18:08.728 "nguid": "69518A009AEF4C529C6E22F8C452BEC5", 00:18:08.728 "uuid": "69518a00-9aef-4c52-9c6e-22f8c452bec5" 00:18:08.728 } 00:18:08.728 ] 00:18:08.728 } 00:18:08.728 ] 00:18:08.728 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:08.728 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=169523 00:18:08.728 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:08.728 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:08.728 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:08.728 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:08.728 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:08.728 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:08.728 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:08.728 23:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:08.728 [2024-11-19 23:42:42.972560] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:08.987 Malloc3 00:18:08.987 23:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:09.244 [2024-11-19 23:42:43.367392] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:09.244 23:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:09.245 Asynchronous Event Request test 00:18:09.245 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:09.245 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:09.245 Registering asynchronous event callbacks... 00:18:09.245 Starting namespace attribute notice tests for all controllers... 00:18:09.245 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:09.245 aer_cb - Changed Namespace 00:18:09.245 Cleaning up... 00:18:09.504 [ 00:18:09.504 { 00:18:09.504 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:09.504 "subtype": "Discovery", 00:18:09.504 "listen_addresses": [], 00:18:09.504 "allow_any_host": true, 00:18:09.504 "hosts": [] 00:18:09.504 }, 00:18:09.504 { 00:18:09.504 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:09.504 "subtype": "NVMe", 00:18:09.504 "listen_addresses": [ 00:18:09.504 { 00:18:09.504 "trtype": "VFIOUSER", 00:18:09.504 "adrfam": "IPv4", 00:18:09.504 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:09.504 "trsvcid": "0" 00:18:09.504 } 00:18:09.504 ], 00:18:09.504 "allow_any_host": true, 00:18:09.504 "hosts": [], 00:18:09.504 "serial_number": "SPDK1", 00:18:09.504 "model_number": "SPDK bdev Controller", 00:18:09.504 "max_namespaces": 32, 00:18:09.504 "min_cntlid": 1, 00:18:09.504 "max_cntlid": 65519, 00:18:09.504 "namespaces": [ 00:18:09.504 { 00:18:09.504 "nsid": 1, 00:18:09.504 "bdev_name": "Malloc1", 00:18:09.504 "name": "Malloc1", 00:18:09.504 "nguid": "94E678E91A38420DB707B4AC0DDDBEA3", 00:18:09.504 "uuid": "94e678e9-1a38-420d-b707-b4ac0dddbea3" 00:18:09.504 }, 00:18:09.504 { 00:18:09.504 "nsid": 2, 00:18:09.504 "bdev_name": "Malloc3", 00:18:09.504 "name": "Malloc3", 00:18:09.504 "nguid": "FDC5C2ED9F7646AEB600B184B35467D9", 00:18:09.504 "uuid": "fdc5c2ed-9f76-46ae-b600-b184b35467d9" 00:18:09.504 } 00:18:09.504 ] 00:18:09.504 }, 00:18:09.504 { 00:18:09.504 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:09.504 "subtype": "NVMe", 00:18:09.504 "listen_addresses": [ 00:18:09.504 { 00:18:09.504 "trtype": "VFIOUSER", 00:18:09.504 "adrfam": "IPv4", 00:18:09.504 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:09.504 "trsvcid": "0" 00:18:09.504 } 00:18:09.504 ], 00:18:09.504 "allow_any_host": true, 00:18:09.504 "hosts": [], 00:18:09.504 "serial_number": "SPDK2", 00:18:09.504 "model_number": "SPDK bdev 
Controller", 00:18:09.504 "max_namespaces": 32, 00:18:09.504 "min_cntlid": 1, 00:18:09.504 "max_cntlid": 65519, 00:18:09.504 "namespaces": [ 00:18:09.504 { 00:18:09.504 "nsid": 1, 00:18:09.504 "bdev_name": "Malloc2", 00:18:09.504 "name": "Malloc2", 00:18:09.504 "nguid": "69518A009AEF4C529C6E22F8C452BEC5", 00:18:09.504 "uuid": "69518a00-9aef-4c52-9c6e-22f8c452bec5" 00:18:09.504 } 00:18:09.504 ] 00:18:09.504 } 00:18:09.504 ] 00:18:09.504 23:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 169523 00:18:09.504 23:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:09.504 23:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:09.504 23:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:09.504 23:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:09.504 [2024-11-19 23:42:43.669003] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:18:09.504 [2024-11-19 23:42:43.669046] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169658 ] 00:18:09.504 [2024-11-19 23:42:43.726220] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:09.505 [2024-11-19 23:42:43.728589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:09.505 [2024-11-19 23:42:43.728618] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8a9e758000 00:18:09.505 [2024-11-19 23:42:43.729595] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.505 [2024-11-19 23:42:43.730602] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.505 [2024-11-19 23:42:43.731603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.505 [2024-11-19 23:42:43.732609] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:09.505 [2024-11-19 23:42:43.733616] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:09.505 [2024-11-19 23:42:43.734621] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.505 [2024-11-19 23:42:43.735628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:09.505 [2024-11-19 23:42:43.736636] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:18:09.505 [2024-11-19 23:42:43.737641] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:09.505 [2024-11-19 23:42:43.737663] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8a9d450000 00:18:09.505 [2024-11-19 23:42:43.738786] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:09.505 [2024-11-19 23:42:43.756579] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:09.505 [2024-11-19 23:42:43.756614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:09.505 [2024-11-19 23:42:43.758691] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:09.505 [2024-11-19 23:42:43.758749] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:09.505 [2024-11-19 23:42:43.758837] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:09.505 [2024-11-19 23:42:43.758859] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:09.505 [2024-11-19 23:42:43.758869] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:09.505 [2024-11-19 23:42:43.759696] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:09.505 [2024-11-19 23:42:43.759716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:09.505 [2024-11-19 23:42:43.759729] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:09.505 [2024-11-19 23:42:43.760703] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:09.505 [2024-11-19 23:42:43.760723] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:09.505 [2024-11-19 23:42:43.760736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:09.505 [2024-11-19 23:42:43.761712] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:09.505 [2024-11-19 23:42:43.761732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:09.505 [2024-11-19 23:42:43.762714] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:09.505 [2024-11-19 23:42:43.762733] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:18:09.505 [2024-11-19 23:42:43.762742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:09.505 [2024-11-19 23:42:43.762753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:09.505 [2024-11-19 23:42:43.762862] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:09.505 [2024-11-19 23:42:43.762870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:09.505 [2024-11-19 23:42:43.762878] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:09.505 [2024-11-19 23:42:43.763727] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:09.505 [2024-11-19 23:42:43.764733] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:09.505 [2024-11-19 23:42:43.765754] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:09.505 [2024-11-19 23:42:43.766744] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:09.505 [2024-11-19 23:42:43.766812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:09.505 [2024-11-19 23:42:43.767768] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:09.505 [2024-11-19 23:42:43.767792] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:09.505 [2024-11-19 23:42:43.767801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:09.505 [2024-11-19 23:42:43.767824] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:09.505 [2024-11-19 23:42:43.767837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:09.505 [2024-11-19 23:42:43.767857] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:09.505 [2024-11-19 23:42:43.767866] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.505 [2024-11-19 23:42:43.767872] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.505 [2024-11-19 23:42:43.767888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.505 [2024-11-19 23:42:43.774086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:09.505 
[2024-11-19 23:42:43.774109] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:09.505 [2024-11-19 23:42:43.774118] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:09.505 [2024-11-19 23:42:43.774125] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:09.505 [2024-11-19 23:42:43.774133] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:09.505 [2024-11-19 23:42:43.774145] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:09.505 [2024-11-19 23:42:43.774154] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:09.505 [2024-11-19 23:42:43.774162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:09.505 [2024-11-19 23:42:43.774178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:09.505 [2024-11-19 23:42:43.774194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:09.505 [2024-11-19 23:42:43.782092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:09.505 [2024-11-19 23:42:43.782116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.505 [2024-11-19 23:42:43.782130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.505 [2024-11-19 23:42:43.782142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.505 [2024-11-19 23:42:43.782154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.505 [2024-11-19 23:42:43.782163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:09.505 [2024-11-19 23:42:43.782175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:09.505 [2024-11-19 23:42:43.782193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:09.505 [2024-11-19 23:42:43.790080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:09.505 [2024-11-19 23:42:43.790102] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:09.506 [2024-11-19 23:42:43.790113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:18:09.506 [2024-11-19 23:42:43.790125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:09.506 [2024-11-19 23:42:43.790135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:09.506 [2024-11-19 23:42:43.790149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:09.506 [2024-11-19 23:42:43.798079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:09.506 [2024-11-19 23:42:43.798156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:09.506 [2024-11-19 23:42:43.798173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:09.506 [2024-11-19 23:42:43.798186] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:09.506 [2024-11-19 23:42:43.798194] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:09.506 [2024-11-19 23:42:43.798200] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.506 [2024-11-19 23:42:43.798210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:09.506 [2024-11-19 23:42:43.806081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:09.506 [2024-11-19 23:42:43.806103] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:09.506 [2024-11-19 23:42:43.806123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:09.506 [2024-11-19 23:42:43.806138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:09.506 [2024-11-19 23:42:43.806151] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:09.506 [2024-11-19 23:42:43.806159] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.506 [2024-11-19 23:42:43.806164] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.506 [2024-11-19 23:42:43.806174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.766 [2024-11-19 23:42:43.814086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:09.766 [2024-11-19 23:42:43.814118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:09.766 [2024-11-19 23:42:43.814137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:18:09.766 [2024-11-19 23:42:43.814151] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:09.766 [2024-11-19 23:42:43.814163] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.766 [2024-11-19 23:42:43.814170] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.766 [2024-11-19 23:42:43.814180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.766 [2024-11-19 23:42:43.822080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:09.766 [2024-11-19 23:42:43.822105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:09.766 [2024-11-19 23:42:43.822119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:09.766 [2024-11-19 23:42:43.822135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:09.766 [2024-11-19 23:42:43.822146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:09.766 [2024-11-19 23:42:43.822154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:09.766 [2024-11-19 23:42:43.822162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:09.766 [2024-11-19 23:42:43.822170] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:09.766 [2024-11-19 23:42:43.822178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:09.766 [2024-11-19 23:42:43.822186] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:09.766 [2024-11-19 23:42:43.822211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:09.766 [2024-11-19 23:42:43.830081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:09.766 [2024-11-19 23:42:43.830110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:09.766 [2024-11-19 23:42:43.837110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:09.766 [2024-11-19 23:42:43.837136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:09.766 [2024-11-19 23:42:43.846080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:18:09.766 [2024-11-19 23:42:43.846106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:09.766 [2024-11-19 23:42:43.854094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:09.766 [2024-11-19 23:42:43.854126] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:09.766 [2024-11-19 23:42:43.854138] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:09.766 [2024-11-19 23:42:43.854144] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:09.766 [2024-11-19 23:42:43.854150] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:09.766 [2024-11-19 23:42:43.854156] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:09.766 [2024-11-19 23:42:43.854165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:09.766 [2024-11-19 23:42:43.854181] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:09.766 [2024-11-19 23:42:43.854189] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:09.766 [2024-11-19 23:42:43.854195] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.766 [2024-11-19 23:42:43.854204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:09.766 [2024-11-19 23:42:43.854215] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:09.766 [2024-11-19 23:42:43.854223] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.766 [2024-11-19 23:42:43.854228] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.766 [2024-11-19 23:42:43.854237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.766 [2024-11-19 23:42:43.854249] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:09.766 [2024-11-19 23:42:43.854256] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:09.766 [2024-11-19 23:42:43.854262] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.766 [2024-11-19 23:42:43.854270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:09.766 [2024-11-19 23:42:43.862083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:09.766 [2024-11-19 23:42:43.862111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:09.766 [2024-11-19 23:42:43.862129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:09.766 
[2024-11-19 23:42:43.862141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:09.766 ===================================================== 00:18:09.766 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:09.766 ===================================================== 00:18:09.766 Controller Capabilities/Features 00:18:09.766 ================================ 00:18:09.766 Vendor ID: 4e58 00:18:09.766 Subsystem Vendor ID: 4e58 00:18:09.766 Serial Number: SPDK2 00:18:09.766 Model Number: SPDK bdev Controller 00:18:09.766 Firmware Version: 25.01 00:18:09.766 Recommended Arb Burst: 6 00:18:09.766 IEEE OUI Identifier: 8d 6b 50 00:18:09.766 Multi-path I/O 00:18:09.766 May have multiple subsystem ports: Yes 00:18:09.766 May have multiple controllers: Yes 00:18:09.766 Associated with SR-IOV VF: No 00:18:09.766 Max Data Transfer Size: 131072 00:18:09.766 Max Number of Namespaces: 32 00:18:09.766 Max Number of I/O Queues: 127 00:18:09.766 NVMe Specification Version (VS): 1.3 00:18:09.766 NVMe Specification Version (Identify): 1.3 00:18:09.766 Maximum Queue Entries: 256 00:18:09.766 Contiguous Queues Required: Yes 00:18:09.766 Arbitration Mechanisms Supported 00:18:09.766 Weighted Round Robin: Not Supported 00:18:09.766 Vendor Specific: Not Supported 00:18:09.766 Reset Timeout: 15000 ms 00:18:09.766 Doorbell Stride: 4 bytes 00:18:09.766 NVM Subsystem Reset: Not Supported 00:18:09.766 Command Sets Supported 00:18:09.766 NVM Command Set: Supported 00:18:09.767 Boot Partition: Not Supported 00:18:09.767 Memory Page Size Minimum: 4096 bytes 00:18:09.767 Memory Page Size Maximum: 4096 bytes 00:18:09.767 Persistent Memory Region: Not Supported 00:18:09.767 Optional Asynchronous Events Supported 00:18:09.767 Namespace Attribute Notices: Supported 00:18:09.767 Firmware Activation Notices: Not Supported 00:18:09.767 ANA Change Notices: Not Supported 00:18:09.767 PLE Aggregate Log Change Notices: Not Supported 00:18:09.767 LBA Status Info Alert Notices: Not Supported 00:18:09.767 EGE Aggregate Log Change Notices: Not Supported 00:18:09.767 Normal NVM Subsystem Shutdown event: Not Supported 00:18:09.767 Zone Descriptor Change Notices: Not Supported 00:18:09.767 Discovery Log Change Notices: Not Supported 00:18:09.767 Controller Attributes 00:18:09.767 128-bit Host Identifier: Supported 00:18:09.767 Non-Operational Permissive Mode: Not Supported 00:18:09.767 NVM Sets: Not Supported 00:18:09.767 Read Recovery Levels: Not Supported 00:18:09.767 Endurance Groups: Not Supported 00:18:09.767 Predictable Latency Mode: Not Supported 00:18:09.767 Traffic Based Keep ALive: Not Supported 00:18:09.767 Namespace Granularity: Not Supported 00:18:09.767 SQ Associations: Not Supported 00:18:09.767 UUID List: Not Supported 00:18:09.767 Multi-Domain Subsystem: Not Supported 00:18:09.767 Fixed Capacity Management: Not Supported 00:18:09.767 Variable Capacity Management: Not Supported 00:18:09.767 Delete Endurance Group: Not Supported 00:18:09.767 Delete NVM Set: Not Supported 00:18:09.767 Extended LBA Formats Supported: Not Supported 00:18:09.767 Flexible Data Placement Supported: Not Supported 00:18:09.767 00:18:09.767 Controller Memory Buffer Support 00:18:09.767 ================================ 00:18:09.767 Supported: No 00:18:09.767 00:18:09.767 Persistent Memory Region Support 00:18:09.767 ================================ 00:18:09.767 Supported: No 00:18:09.767 00:18:09.767 Admin Command Set Attributes 
00:18:09.767 ============================ 00:18:09.767 Security Send/Receive: Not Supported 00:18:09.767 Format NVM: Not Supported 00:18:09.767 Firmware Activate/Download: Not Supported 00:18:09.767 Namespace Management: Not Supported 00:18:09.767 Device Self-Test: Not Supported 00:18:09.767 Directives: Not Supported 00:18:09.767 NVMe-MI: Not Supported 00:18:09.767 Virtualization Management: Not Supported 00:18:09.767 Doorbell Buffer Config: Not Supported 00:18:09.767 Get LBA Status Capability: Not Supported 00:18:09.767 Command & Feature Lockdown Capability: Not Supported 00:18:09.767 Abort Command Limit: 4 00:18:09.767 Async Event Request Limit: 4 00:18:09.767 Number of Firmware Slots: N/A 00:18:09.767 Firmware Slot 1 Read-Only: N/A 00:18:09.767 Firmware Activation Without Reset: N/A 00:18:09.767 Multiple Update Detection Support: N/A 00:18:09.767 Firmware Update Granularity: No Information Provided 00:18:09.767 Per-Namespace SMART Log: No 00:18:09.767 Asymmetric Namespace Access Log Page: Not Supported 00:18:09.767 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:09.767 Command Effects Log Page: Supported 00:18:09.767 Get Log Page Extended Data: Supported 00:18:09.767 Telemetry Log Pages: Not Supported 00:18:09.767 Persistent Event Log Pages: Not Supported 00:18:09.767 Supported Log Pages Log Page: May Support 00:18:09.767 Commands Supported & Effects Log Page: Not Supported 00:18:09.767 Feature Identifiers & Effects Log Page:May Support 00:18:09.767 NVMe-MI Commands & Effects Log Page: May Support 00:18:09.767 Data Area 4 for Telemetry Log: Not Supported 00:18:09.767 Error Log Page Entries Supported: 128 00:18:09.767 Keep Alive: Supported 00:18:09.767 Keep Alive Granularity: 10000 ms 00:18:09.767 00:18:09.767 NVM Command Set Attributes 00:18:09.767 ========================== 00:18:09.767 Submission Queue Entry Size 00:18:09.767 Max: 64 00:18:09.767 Min: 64 00:18:09.767 Completion Queue Entry Size 00:18:09.767 Max: 16 00:18:09.767 Min: 16 00:18:09.767 Number of Namespaces: 32 00:18:09.767 Compare Command: Supported 00:18:09.767 Write Uncorrectable Command: Not Supported 00:18:09.767 Dataset Management Command: Supported 00:18:09.767 Write Zeroes Command: Supported 00:18:09.767 Set Features Save Field: Not Supported 00:18:09.767 Reservations: Not Supported 00:18:09.767 Timestamp: Not Supported 00:18:09.767 Copy: Supported 00:18:09.767 Volatile Write Cache: Present 00:18:09.767 Atomic Write Unit (Normal): 1 00:18:09.767 Atomic Write Unit (PFail): 1 00:18:09.767 Atomic Compare & Write Unit: 1 00:18:09.767 Fused Compare & Write: Supported 00:18:09.767 Scatter-Gather List 00:18:09.767 SGL Command Set: Supported (Dword aligned) 00:18:09.767 SGL Keyed: Not Supported 00:18:09.767 SGL Bit Bucket Descriptor: Not Supported 00:18:09.767 SGL Metadata Pointer: Not Supported 00:18:09.767 Oversized SGL: Not Supported 00:18:09.767 SGL Metadata Address: Not Supported 00:18:09.767 SGL Offset: Not Supported 00:18:09.767 Transport SGL Data Block: Not Supported 00:18:09.767 Replay Protected Memory Block: Not Supported 00:18:09.767 00:18:09.767 Firmware Slot Information 00:18:09.767 ========================= 00:18:09.767 Active slot: 1 00:18:09.767 Slot 1 Firmware Revision: 25.01 00:18:09.767 00:18:09.767 00:18:09.767 Commands Supported and Effects 00:18:09.767 ============================== 00:18:09.767 Admin Commands 00:18:09.767 -------------- 00:18:09.767 Get Log Page (02h): Supported 00:18:09.767 Identify (06h): Supported 00:18:09.767 Abort (08h): Supported 00:18:09.767 Set Features (09h): Supported 
00:18:09.767 Get Features (0Ah): Supported 00:18:09.767 Asynchronous Event Request (0Ch): Supported 00:18:09.767 Keep Alive (18h): Supported 00:18:09.767 I/O Commands 00:18:09.767 ------------ 00:18:09.767 Flush (00h): Supported LBA-Change 00:18:09.767 Write (01h): Supported LBA-Change 00:18:09.767 Read (02h): Supported 00:18:09.767 Compare (05h): Supported 00:18:09.767 Write Zeroes (08h): Supported LBA-Change 00:18:09.767 Dataset Management (09h): Supported LBA-Change 00:18:09.767 Copy (19h): Supported LBA-Change 00:18:09.767 00:18:09.767 Error Log 00:18:09.767 ========= 00:18:09.767 00:18:09.767 Arbitration 00:18:09.767 =========== 00:18:09.767 Arbitration Burst: 1 00:18:09.767 00:18:09.767 Power Management 00:18:09.767 ================ 00:18:09.767 Number of Power States: 1 00:18:09.767 Current Power State: Power State #0 00:18:09.767 Power State #0: 00:18:09.767 Max Power: 0.00 W 00:18:09.767 Non-Operational State: Operational 00:18:09.767 Entry Latency: Not Reported 00:18:09.767 Exit Latency: Not Reported 00:18:09.767 Relative Read Throughput: 0 00:18:09.767 Relative Read Latency: 0 00:18:09.767 Relative Write Throughput: 0 00:18:09.767 Relative Write Latency: 0 00:18:09.767 Idle Power: Not Reported 00:18:09.767 Active Power: Not Reported 00:18:09.768 Non-Operational Permissive Mode: Not Supported 00:18:09.768 00:18:09.768 Health Information 00:18:09.768 ================== 00:18:09.768 Critical Warnings: 00:18:09.768 Available Spare Space: OK 00:18:09.768 Temperature: OK 00:18:09.768 Device Reliability: OK 00:18:09.768 Read Only: No 00:18:09.768 Volatile Memory Backup: OK 00:18:09.768 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:09.768 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:09.768 Available Spare: 0% 00:18:09.768 [2024-11-19 23:42:43.862267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:09.768 [2024-11-19 23:42:43.870083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:09.768 [2024-11-19 23:42:43.870145] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:09.768 [2024-11-19 23:42:43.870164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.768 [2024-11-19 23:42:43.870175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.768 [2024-11-19 23:42:43.870185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.768 [2024-11-19 23:42:43.870194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.768 [2024-11-19 23:42:43.870260] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:09.768 [2024-11-19 23:42:43.870281] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:09.768 [2024-11-19 23:42:43.871262] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:09.768 [2024-11-19 23:42:43.871336] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*:
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:09.768 [2024-11-19 23:42:43.871370] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:09.768 [2024-11-19 23:42:43.872270] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:09.768 [2024-11-19 23:42:43.872294] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:09.768 [2024-11-19 23:42:43.872345] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:09.768 [2024-11-19 23:42:43.873539] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:09.768 Available Spare Threshold: 0% 00:18:09.768 Life Percentage Used: 0% 00:18:09.768 Data Units Read: 0 00:18:09.768 Data Units Written: 0 00:18:09.768 Host Read Commands: 0 00:18:09.768 Host Write Commands: 0 00:18:09.768 Controller Busy Time: 0 minutes 00:18:09.768 Power Cycles: 0 00:18:09.768 Power On Hours: 0 hours 00:18:09.768 Unsafe Shutdowns: 0 00:18:09.768 Unrecoverable Media Errors: 0 00:18:09.768 Lifetime Error Log Entries: 0 00:18:09.768 Warning Temperature Time: 0 minutes 00:18:09.768 Critical Temperature Time: 0 minutes 00:18:09.768 00:18:09.768 Number of Queues 00:18:09.768 ================ 00:18:09.768 Number of I/O Submission Queues: 127 00:18:09.768 Number of I/O Completion Queues: 127 00:18:09.768 00:18:09.768 Active Namespaces 00:18:09.768 ================= 00:18:09.768 Namespace ID:1 00:18:09.768 Error Recovery Timeout: Unlimited 00:18:09.768 Command Set Identifier: NVM (00h) 00:18:09.768 Deallocate: Supported 00:18:09.768 Deallocated/Unwritten Error: Not Supported 00:18:09.768 Deallocated Read Value: Unknown 00:18:09.768 Deallocate in Write Zeroes: Not Supported 00:18:09.768 Deallocated Guard Field: 0xFFFF 00:18:09.768 Flush: Supported 00:18:09.768 Reservation: Supported 00:18:09.768 Namespace Sharing Capabilities: Multiple Controllers 00:18:09.768 Size (in LBAs): 131072 (0GiB) 00:18:09.768 Capacity (in LBAs): 131072 (0GiB) 00:18:09.768 Utilization (in LBAs): 131072 (0GiB) 00:18:09.768 NGUID: 69518A009AEF4C529C6E22F8C452BEC5 00:18:09.768 UUID: 69518a00-9aef-4c52-9c6e-22f8c452bec5 00:18:09.768 Thin Provisioning: Not Supported 00:18:09.768 Per-NS Atomic Units: Yes 00:18:09.768 Atomic Boundary Size (Normal): 0 00:18:09.768 Atomic Boundary Size (PFail): 0 00:18:09.768 Atomic Boundary Offset: 0 00:18:09.768 Maximum Single Source Range Length: 65535 00:18:09.768 Maximum Copy Length: 65535 00:18:09.768 Maximum Source Range Count: 1 00:18:09.768 NGUID/EUI64 Never Reused: No 00:18:09.768 Namespace Write Protected: No 00:18:09.768 Number of LBA Formats: 1 00:18:09.768 Current LBA Format: LBA Format #00 00:18:09.768 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:09.768 00:18:09.768 23:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:10.026 [2024-11-19 23:42:44.112814] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:15.290 Initializing NVMe Controllers 00:18:15.290
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:15.290 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:15.290 Initialization complete. Launching workers. 00:18:15.290 ======================================================== 00:18:15.290 Latency(us) 00:18:15.290 Device Information : IOPS MiB/s Average min max 00:18:15.290 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34128.87 133.32 3749.90 1166.53 9952.11 00:18:15.290 ======================================================== 00:18:15.290 Total : 34128.87 133.32 3749.90 1166.53 9952.11 00:18:15.290 00:18:15.290 [2024-11-19 23:42:49.218483] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:15.290 23:42:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:15.290 [2024-11-19 23:42:49.470162] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:20.557 Initializing NVMe Controllers 00:18:20.557 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:20.557 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:20.557 Initialization complete. Launching workers. 00:18:20.557 ======================================================== 00:18:20.557 Latency(us) 00:18:20.557 Device Information : IOPS MiB/s Average min max 00:18:20.557 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30733.57 120.05 4164.69 1239.74 7637.63 00:18:20.557 ======================================================== 00:18:20.557 Total : 30733.57 120.05 4164.69 1239.74 7637.63 00:18:20.557 00:18:20.557 [2024-11-19 23:42:54.493206] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:20.557 23:42:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:20.557 [2024-11-19 23:42:54.722441] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:25.819 [2024-11-19 23:42:59.864212] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:25.820 Initializing NVMe Controllers 00:18:25.820 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:25.820 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:25.820 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:25.820 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:25.820 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:25.820 Initialization complete. Launching workers. 
00:18:25.820 Starting thread on core 2 00:18:25.820 Starting thread on core 3 00:18:25.820 Starting thread on core 1 00:18:25.820 23:42:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:26.077 [2024-11-19 23:43:00.188097] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:29.360 [2024-11-19 23:43:03.245428] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:29.360 Initializing NVMe Controllers 00:18:29.360 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:29.360 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:29.360 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:29.360 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:29.360 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:29.360 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:29.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:29.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:29.360 Initialization complete. Launching workers. 00:18:29.360 Starting thread on core 1 with urgent priority queue 00:18:29.360 Starting thread on core 2 with urgent priority queue 00:18:29.360 Starting thread on core 3 with urgent priority queue 00:18:29.360 Starting thread on core 0 with urgent priority queue 00:18:29.360 SPDK bdev Controller (SPDK2 ) core 0: 6252.00 IO/s 15.99 secs/100000 ios 00:18:29.360 SPDK bdev Controller (SPDK2 ) core 1: 5868.33 IO/s 17.04 secs/100000 ios 00:18:29.360 SPDK bdev Controller (SPDK2 ) core 2: 6390.67 IO/s 15.65 secs/100000 ios 00:18:29.360 SPDK bdev Controller (SPDK2 ) core 3: 4884.00 IO/s 20.48 secs/100000 ios 00:18:29.360 ======================================================== 00:18:29.360 00:18:29.360 23:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:29.360 [2024-11-19 23:43:03.549893] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:29.360 Initializing NVMe Controllers 00:18:29.360 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:29.360 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:29.360 Namespace ID: 1 size: 0GB 00:18:29.360 Initialization complete. 00:18:29.360 INFO: using host memory buffer for IO 00:18:29.360 Hello world! 
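A quick cross-check of the throughput figures above: at the 4096-byte I/O size passed to spdk_nvme_perf (-o 4096), the MiB/s column is just IOPS scaled by the block size, i.e. 34128.87 IOPS x 4096 B / 2^20 ≈ 133.32 MiB/s for the read run and 30733.57 x 4096 / 2^20 ≈ 120.05 MiB/s for the write run; likewise the arbitration table's secs/100000 ios is 100000 divided by the per-core IO/s (100000 / 6252.00 ≈ 15.99 s). A minimal shell sketch of the same arithmetic, with the values copied from the log (awk here is only a calculator, not part of the test harness):

# Convert the reported IOPS to MiB/s at a 4 KiB I/O size (read run, then write run).
for iops in 34128.87 30733.57; do
  awk -v iops="$iops" 'BEGIN { printf "%s IOPS * 4096 B = %.2f MiB/s\n", iops, iops * 4096 / (1024 * 1024) }'
done
# Arbitration summary: seconds per 100000 I/Os for the core reporting 6252.00 IO/s.
awk 'BEGIN { printf "100000 / 6252.00 = %.2f secs/100000 ios\n", 100000 / 6252.00 }'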
00:18:29.360 [2024-11-19 23:43:03.558953] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:29.360 23:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:29.618 [2024-11-19 23:43:03.851003] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:30.993 Initializing NVMe Controllers 00:18:30.993 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.993 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:30.993 Initialization complete. Launching workers. 00:18:30.993 submit (in ns) avg, min, max = 10113.9, 3523.3, 4018592.2 00:18:30.993 complete (in ns) avg, min, max = 22715.1, 2066.7, 4015516.7 00:18:30.993 00:18:30.993 Submit histogram 00:18:30.993 ================ 00:18:30.993 Range in us Cumulative Count 00:18:30.993 3.508 - 3.532: 0.1227% ( 16) 00:18:30.993 3.532 - 3.556: 0.4985% ( 49) 00:18:30.993 3.556 - 3.579: 1.9788% ( 193) 00:18:30.993 3.579 - 3.603: 4.8474% ( 374) 00:18:30.993 3.603 - 3.627: 10.7992% ( 776) 00:18:30.993 3.627 - 3.650: 19.6119% ( 1149) 00:18:30.993 3.650 - 3.674: 29.1302% ( 1241) 00:18:30.993 3.674 - 3.698: 37.6745% ( 1114) 00:18:30.993 3.698 - 3.721: 45.4211% ( 1010) 00:18:30.993 3.721 - 3.745: 51.1351% ( 745) 00:18:30.993 3.745 - 3.769: 55.3382% ( 548) 00:18:30.993 3.769 - 3.793: 59.8711% ( 591) 00:18:30.993 3.793 - 3.816: 63.4990% ( 473) 00:18:30.993 3.816 - 3.840: 67.1729% ( 479) 00:18:30.993 3.840 - 3.864: 70.7470% ( 466) 00:18:30.993 3.864 - 3.887: 74.8121% ( 530) 00:18:30.993 3.887 - 3.911: 78.9231% ( 536) 00:18:30.993 3.911 - 3.935: 82.4053% ( 454) 00:18:30.993 3.935 - 3.959: 85.3275% ( 381) 00:18:30.993 3.959 - 3.982: 87.2833% ( 255) 00:18:30.993 3.982 - 4.006: 88.9400% ( 216) 00:18:30.993 4.006 - 4.030: 90.2823% ( 175) 00:18:30.993 4.030 - 4.053: 91.4557% ( 153) 00:18:30.993 4.053 - 4.077: 92.4528% ( 130) 00:18:30.993 4.077 - 4.101: 93.2045% ( 98) 00:18:30.993 4.101 - 4.124: 93.9561% ( 98) 00:18:30.993 4.124 - 4.148: 94.5927% ( 83) 00:18:30.993 4.148 - 4.172: 95.0989% ( 66) 00:18:30.993 4.172 - 4.196: 95.4748% ( 49) 00:18:30.993 4.196 - 4.219: 95.8122% ( 44) 00:18:30.993 4.219 - 4.243: 95.9810% ( 22) 00:18:30.993 4.243 - 4.267: 96.1114% ( 17) 00:18:30.993 4.267 - 4.290: 96.2724% ( 21) 00:18:30.993 4.290 - 4.314: 96.4105% ( 18) 00:18:30.993 4.314 - 4.338: 96.5102% ( 13) 00:18:30.993 4.338 - 4.361: 96.5869% ( 10) 00:18:30.993 4.361 - 4.385: 96.6789% ( 12) 00:18:30.993 4.385 - 4.409: 96.7786% ( 13) 00:18:30.993 4.409 - 4.433: 96.8400% ( 8) 00:18:30.993 4.433 - 4.456: 96.8707% ( 4) 00:18:30.993 4.456 - 4.480: 96.9551% ( 11) 00:18:30.993 4.480 - 4.504: 96.9781% ( 3) 00:18:30.993 4.504 - 4.527: 97.0087% ( 4) 00:18:30.993 4.527 - 4.551: 97.0318% ( 3) 00:18:30.993 4.551 - 4.575: 97.0471% ( 2) 00:18:30.993 4.575 - 4.599: 97.0548% ( 1) 00:18:30.993 4.622 - 4.646: 97.0701% ( 2) 00:18:30.993 4.670 - 4.693: 97.1085% ( 5) 00:18:30.993 4.693 - 4.717: 97.1238% ( 2) 00:18:30.993 4.717 - 4.741: 97.1698% ( 6) 00:18:30.993 4.741 - 4.764: 97.2082% ( 5) 00:18:30.993 4.764 - 4.788: 97.2158% ( 1) 00:18:30.993 4.788 - 4.812: 97.2542% ( 5) 00:18:30.993 4.812 - 4.836: 97.3155% ( 8) 00:18:30.993 4.836 - 4.859: 97.3385% ( 3) 00:18:30.993 4.859 - 4.883: 97.3999% ( 8) 00:18:30.993 4.883 - 4.907: 97.4536% ( 7) 00:18:30.993 4.907 
- 4.930: 97.4919% ( 5) 00:18:30.993 4.930 - 4.954: 97.5303% ( 5) 00:18:30.993 4.954 - 4.978: 97.6147% ( 11) 00:18:30.993 4.978 - 5.001: 97.6760% ( 8) 00:18:30.993 5.001 - 5.025: 97.7297% ( 7) 00:18:30.993 5.025 - 5.049: 97.7757% ( 6) 00:18:30.993 5.049 - 5.073: 97.8141% ( 5) 00:18:30.993 5.073 - 5.096: 97.8218% ( 1) 00:18:30.993 5.096 - 5.120: 97.8448% ( 3) 00:18:30.993 5.120 - 5.144: 97.8754% ( 4) 00:18:30.993 5.144 - 5.167: 97.9445% ( 9) 00:18:30.993 5.167 - 5.191: 97.9828% ( 5) 00:18:30.993 5.191 - 5.215: 98.0135% ( 4) 00:18:30.993 5.215 - 5.239: 98.0288% ( 2) 00:18:30.993 5.239 - 5.262: 98.0672% ( 5) 00:18:30.993 5.262 - 5.286: 98.0825% ( 2) 00:18:30.993 5.310 - 5.333: 98.1362% ( 7) 00:18:30.993 5.333 - 5.357: 98.1592% ( 3) 00:18:30.993 5.357 - 5.381: 98.1822% ( 3) 00:18:30.993 5.381 - 5.404: 98.2052% ( 3) 00:18:30.993 5.404 - 5.428: 98.2129% ( 1) 00:18:30.993 5.452 - 5.476: 98.2206% ( 1) 00:18:30.993 5.476 - 5.499: 98.2283% ( 1) 00:18:30.993 5.499 - 5.523: 98.2436% ( 2) 00:18:30.993 5.547 - 5.570: 98.2513% ( 1) 00:18:30.993 5.807 - 5.831: 98.2589% ( 1) 00:18:30.993 5.831 - 5.855: 98.2666% ( 1) 00:18:30.993 5.855 - 5.879: 98.2743% ( 1) 00:18:30.993 6.827 - 6.874: 98.2819% ( 1) 00:18:30.993 7.111 - 7.159: 98.2896% ( 1) 00:18:30.993 7.159 - 7.206: 98.2973% ( 1) 00:18:30.993 7.396 - 7.443: 98.3126% ( 2) 00:18:30.993 7.443 - 7.490: 98.3203% ( 1) 00:18:30.993 7.538 - 7.585: 98.3280% ( 1) 00:18:30.993 7.633 - 7.680: 98.3356% ( 1) 00:18:30.993 7.870 - 7.917: 98.3510% ( 2) 00:18:30.993 8.059 - 8.107: 98.3817% ( 4) 00:18:30.993 8.107 - 8.154: 98.3893% ( 1) 00:18:30.994 8.249 - 8.296: 98.3970% ( 1) 00:18:30.994 8.296 - 8.344: 98.4047% ( 1) 00:18:30.994 8.391 - 8.439: 98.4277% ( 3) 00:18:30.994 8.439 - 8.486: 98.4430% ( 2) 00:18:30.994 8.628 - 8.676: 98.4507% ( 1) 00:18:30.994 8.676 - 8.723: 98.4660% ( 2) 00:18:30.994 8.723 - 8.770: 98.4737% ( 1) 00:18:30.994 8.770 - 8.818: 98.4814% ( 1) 00:18:30.994 8.818 - 8.865: 98.4890% ( 1) 00:18:30.994 8.865 - 8.913: 98.4967% ( 1) 00:18:30.994 8.913 - 8.960: 98.5044% ( 1) 00:18:30.994 8.960 - 9.007: 98.5197% ( 2) 00:18:30.994 9.007 - 9.055: 98.5351% ( 2) 00:18:30.994 9.055 - 9.102: 98.5504% ( 2) 00:18:30.994 9.102 - 9.150: 98.5581% ( 1) 00:18:30.994 9.150 - 9.197: 98.5734% ( 2) 00:18:30.994 9.292 - 9.339: 98.5811% ( 1) 00:18:30.994 9.339 - 9.387: 98.5887% ( 1) 00:18:30.994 9.434 - 9.481: 98.6118% ( 3) 00:18:30.994 9.529 - 9.576: 98.6194% ( 1) 00:18:30.994 9.766 - 9.813: 98.6271% ( 1) 00:18:30.994 9.813 - 9.861: 98.6348% ( 1) 00:18:30.994 9.861 - 9.908: 98.6424% ( 1) 00:18:30.994 9.908 - 9.956: 98.6501% ( 1) 00:18:30.994 10.003 - 10.050: 98.6578% ( 1) 00:18:30.994 10.098 - 10.145: 98.6731% ( 2) 00:18:30.994 10.145 - 10.193: 98.6884% ( 2) 00:18:30.994 10.240 - 10.287: 98.6961% ( 1) 00:18:30.994 10.477 - 10.524: 98.7038% ( 1) 00:18:30.994 10.619 - 10.667: 98.7115% ( 1) 00:18:30.994 11.093 - 11.141: 98.7191% ( 1) 00:18:30.994 11.378 - 11.425: 98.7268% ( 1) 00:18:30.994 11.567 - 11.615: 98.7345% ( 1) 00:18:30.994 11.710 - 11.757: 98.7421% ( 1) 00:18:30.994 12.326 - 12.421: 98.7498% ( 1) 00:18:30.994 12.990 - 13.084: 98.7575% ( 1) 00:18:30.994 13.464 - 13.559: 98.7728% ( 2) 00:18:30.994 13.559 - 13.653: 98.7805% ( 1) 00:18:30.994 13.748 - 13.843: 98.7958% ( 2) 00:18:30.994 13.938 - 14.033: 98.8035% ( 1) 00:18:30.994 14.127 - 14.222: 98.8112% ( 1) 00:18:30.994 14.412 - 14.507: 98.8188% ( 1) 00:18:30.994 14.886 - 14.981: 98.8265% ( 1) 00:18:30.994 14.981 - 15.076: 98.8342% ( 1) 00:18:30.994 15.929 - 16.024: 98.8418% ( 1) 00:18:30.994 17.161 - 17.256: 98.8572% ( 2) 
00:18:30.994 17.256 - 17.351: 98.8802% ( 3) 00:18:30.994 17.351 - 17.446: 98.9185% ( 5) 00:18:30.994 17.446 - 17.541: 98.9569% ( 5) 00:18:30.994 17.541 - 17.636: 99.0106% ( 7) 00:18:30.994 17.636 - 17.730: 99.0719% ( 8) 00:18:30.994 17.730 - 17.825: 99.0796% ( 1) 00:18:30.994 17.825 - 17.920: 99.1256% ( 6) 00:18:30.994 17.920 - 18.015: 99.2253% ( 13) 00:18:30.994 18.015 - 18.110: 99.3404% ( 15) 00:18:30.994 18.110 - 18.204: 99.3941% ( 7) 00:18:30.994 18.204 - 18.299: 99.4784% ( 11) 00:18:30.994 18.299 - 18.394: 99.5321% ( 7) 00:18:30.994 18.394 - 18.489: 99.6012% ( 9) 00:18:30.994 18.489 - 18.584: 99.6625% ( 8) 00:18:30.994 18.584 - 18.679: 99.6932% ( 4) 00:18:30.994 18.679 - 18.773: 99.7009% ( 1) 00:18:30.994 18.773 - 18.868: 99.7316% ( 4) 00:18:30.994 18.868 - 18.963: 99.7546% ( 3) 00:18:30.994 18.963 - 19.058: 99.7699% ( 2) 00:18:30.994 19.153 - 19.247: 99.7776% ( 1) 00:18:30.994 19.342 - 19.437: 99.7929% ( 2) 00:18:30.994 19.532 - 19.627: 99.8006% ( 1) 00:18:30.994 19.721 - 19.816: 99.8083% ( 1) 00:18:30.994 23.324 - 23.419: 99.8159% ( 1) 00:18:30.994 24.273 - 24.462: 99.8236% ( 1) 00:18:30.994 25.979 - 26.169: 99.8313% ( 1) 00:18:30.994 26.927 - 27.117: 99.8389% ( 1) 00:18:30.994 28.444 - 28.634: 99.8466% ( 1) 00:18:30.994 3980.705 - 4004.978: 99.9540% ( 14) 00:18:30.994 4004.978 - 4029.250: 100.0000% ( 6) 00:18:30.994 00:18:30.994 Complete histogram 00:18:30.994 ================== 00:18:30.994 Range in us Cumulative Count 00:18:30.994 2.062 - 2.074: 0.7593% ( 99) 00:18:30.994 2.074 - 2.086: 29.9969% ( 3812) 00:18:30.994 2.086 - 2.098: 42.2074% ( 1592) 00:18:30.994 2.098 - 2.110: 45.4748% ( 426) 00:18:30.994 2.110 - 2.121: 55.5837% ( 1318) 00:18:30.994 2.121 - 2.133: 58.1224% ( 331) 00:18:30.994 2.133 - 2.145: 61.9190% ( 495) 00:18:30.994 2.145 - 2.157: 72.9943% ( 1444) 00:18:30.994 2.157 - 2.169: 75.1266% ( 278) 00:18:30.994 2.169 - 2.181: 77.0364% ( 249) 00:18:30.994 2.181 - 2.193: 80.4725% ( 448) 00:18:30.994 2.193 - 2.204: 81.2395% ( 100) 00:18:30.994 2.204 - 2.216: 82.3593% ( 146) 00:18:30.994 2.216 - 2.228: 87.3830% ( 655) 00:18:30.994 2.228 - 2.240: 89.9524% ( 335) 00:18:30.994 2.240 - 2.252: 91.4327% ( 193) 00:18:30.994 2.252 - 2.264: 92.9437% ( 197) 00:18:30.994 2.264 - 2.276: 93.3272% ( 50) 00:18:30.994 2.276 - 2.287: 93.5803% ( 33) 00:18:30.994 2.287 - 2.299: 94.0558% ( 62) 00:18:30.994 2.299 - 2.311: 94.6541% ( 78) 00:18:30.994 2.311 - 2.323: 95.1680% ( 67) 00:18:30.994 2.323 - 2.335: 95.2907% ( 16) 00:18:30.994 2.335 - 2.347: 95.3597% ( 9) 00:18:30.994 2.347 - 2.359: 95.4287% ( 9) 00:18:30.994 2.359 - 2.370: 95.5208% ( 12) 00:18:30.994 2.370 - 2.382: 95.7279% ( 27) 00:18:30.994 2.382 - 2.394: 95.9503% ( 29) 00:18:30.994 2.394 - 2.406: 96.2111% ( 34) 00:18:30.994 2.406 - 2.418: 96.4258% ( 28) 00:18:30.994 2.418 - 2.430: 96.4949% ( 9) 00:18:30.994 2.430 - 2.441: 96.7096% ( 28) 00:18:30.994 2.441 - 2.453: 96.8707% ( 21) 00:18:30.994 2.453 - 2.465: 97.1391% ( 35) 00:18:30.994 2.465 - 2.477: 97.3385% ( 26) 00:18:30.994 2.477 - 2.489: 97.4843% ( 19) 00:18:30.994 2.489 - 2.501: 97.6147% ( 17) 00:18:30.994 2.501 - 2.513: 97.7297% ( 15) 00:18:30.994 2.513 - 2.524: 97.8294% ( 13) 00:18:30.994 2.524 - 2.536: 97.8678% ( 5) 00:18:30.994 2.536 - 2.548: 97.9521% ( 11) 00:18:30.994 2.548 - 2.560: 97.9751% ( 3) 00:18:30.994 2.560 - 2.572: 97.9982% ( 3) 00:18:30.994 2.572 - 2.584: 98.0442% ( 6) 00:18:30.994 2.584 - 2.596: 98.0518% ( 1) 00:18:30.994 2.596 - 2.607: 98.0749% ( 3) 00:18:30.994 2.607 - 2.619: 98.0825% ( 1) 00:18:30.994 2.631 - 2.643: 98.0979% ( 2) 00:18:30.994 2.643 - 2.655: 
98.1055% ( 1) 00:18:30.994 2.655 - 2.667: 98.1209% ( 2) 00:18:30.994 2.690 - 2.702: 98.1285% ( 1) 00:18:30.994 2.702 - 2.714: 98.1362% ( 1) 00:18:30.994 2.761 - 2.773: 98.1439% ( 1) 00:18:30.994 2.785 - 2.797: 98.1516% ( 1) 00:18:30.994 2.844 - 2.856: 98.1746% ( 3) 00:18:30.994 2.856 - 2.868: 98.1822% ( 1) 00:18:30.994 2.868 - 2.880: 98.1899% ( 1) 00:18:30.994 2.880 - 2.892: 98.1976% ( 1) 00:18:30.994 2.939 - 2.951: 98.2052% ( 1) 00:18:30.994 2.963 - 2.975: 98.2206% ( 2) 00:18:30.994 2.975 - 2.987: 98.2283% ( 1) 00:18:30.994 3.022 - 3.034: 98.2436% ( 2) 00:18:30.994 3.034 - 3.058: 98.2513% ( 1) 00:18:30.994 3.058 - 3.081: 98.2743% ( 3) 00:18:30.994 3.105 - 3.129: 98.2819% ( 1) 00:18:30.994 3.129 - 3.153: 98.2896% ( 1) 00:18:30.994 3.176 - 3.200: 98.3126% ( 3) 00:18:30.994 3.224 - 3.247: 98.3356% ( 3) 00:18:30.994 3.247 - 3.271: 98.3433% ( 1) 00:18:30.994 3.271 - 3.295: 98.3586% ( 2) 00:18:30.994 3.295 - 3.319: 98.3740% ( 2) 00:18:30.994 3.319 - 3.342: 98.4047% ( 4) 00:18:30.995 3.342 - 3.366: 98.4277% ( 3) 00:18:30.995 3.366 - 3.390: 98.4353% ( 1) 00:18:30.995 3.437 - 3.461: 98.4430% ( 1) 00:18:30.995 3.461 - 3.484: 98.4584% ( 2) 00:18:30.995 3.484 - 3.508: 98.4660% ( 1) 00:18:30.995 3.508 - 3.532: 98.4967% ( 4) 00:18:30.995 3.532 - 3.556: 98.5274% ( 4) 00:18:30.995 3.556 - 3.579: 98.5504% ( 3) 00:18:30.995 3.579 - 3.603: 9[2024-11-19 23:43:04.953946] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:30.995 8.5657% ( 2) 00:18:30.995 3.603 - 3.627: 98.5811% ( 2) 00:18:30.995 3.627 - 3.650: 98.5964% ( 2) 00:18:30.995 3.674 - 3.698: 98.6118% ( 2) 00:18:30.995 3.698 - 3.721: 98.6194% ( 1) 00:18:30.995 3.721 - 3.745: 98.6424% ( 3) 00:18:30.995 3.769 - 3.793: 98.6578% ( 2) 00:18:30.995 3.816 - 3.840: 98.6808% ( 3) 00:18:30.995 3.840 - 3.864: 98.6884% ( 1) 00:18:30.995 3.864 - 3.887: 98.7038% ( 2) 00:18:30.995 3.887 - 3.911: 98.7191% ( 2) 00:18:30.995 3.911 - 3.935: 98.7268% ( 1) 00:18:30.995 3.935 - 3.959: 98.7345% ( 1) 00:18:30.995 3.959 - 3.982: 98.7421% ( 1) 00:18:30.995 3.982 - 4.006: 98.7498% ( 1) 00:18:30.995 4.053 - 4.077: 98.7575% ( 1) 00:18:30.995 4.148 - 4.172: 98.7651% ( 1) 00:18:30.995 4.764 - 4.788: 98.7728% ( 1) 00:18:30.995 5.689 - 5.713: 98.7805% ( 1) 00:18:30.995 6.116 - 6.163: 98.7882% ( 1) 00:18:30.995 6.495 - 6.542: 98.7958% ( 1) 00:18:30.995 6.590 - 6.637: 98.8035% ( 1) 00:18:30.995 6.637 - 6.684: 98.8112% ( 1) 00:18:30.995 6.684 - 6.732: 98.8188% ( 1) 00:18:30.995 6.874 - 6.921: 98.8265% ( 1) 00:18:30.995 7.016 - 7.064: 98.8342% ( 1) 00:18:30.995 7.585 - 7.633: 98.8418% ( 1) 00:18:30.995 8.059 - 8.107: 98.8495% ( 1) 00:18:30.995 9.150 - 9.197: 98.8572% ( 1) 00:18:30.995 10.524 - 10.572: 98.8649% ( 1) 00:18:30.995 12.326 - 12.421: 98.8725% ( 1) 00:18:30.995 15.455 - 15.550: 98.8802% ( 1) 00:18:30.995 15.644 - 15.739: 98.8955% ( 2) 00:18:30.995 15.739 - 15.834: 98.9262% ( 4) 00:18:30.995 15.834 - 15.929: 98.9492% ( 3) 00:18:30.995 15.929 - 16.024: 98.9876% ( 5) 00:18:30.995 16.024 - 16.119: 99.0336% ( 6) 00:18:30.995 16.119 - 16.213: 99.0643% ( 4) 00:18:30.995 16.213 - 16.308: 99.0719% ( 1) 00:18:30.995 16.308 - 16.403: 99.1256% ( 7) 00:18:30.995 16.403 - 16.498: 99.1486% ( 3) 00:18:30.995 16.498 - 16.593: 99.1640% ( 2) 00:18:30.995 16.593 - 16.687: 99.2484% ( 11) 00:18:30.995 16.687 - 16.782: 99.2637% ( 2) 00:18:30.995 16.782 - 16.877: 99.3020% ( 5) 00:18:30.995 16.877 - 16.972: 99.3481% ( 6) 00:18:30.995 16.972 - 17.067: 99.3557% ( 1) 00:18:30.995 17.067 - 17.161: 99.3711% ( 2) 00:18:30.995 17.161 - 17.256: 99.3864% 
( 2) 00:18:30.995 17.256 - 17.351: 99.3941% ( 1) 00:18:30.995 17.446 - 17.541: 99.4094% ( 2) 00:18:30.995 17.636 - 17.730: 99.4248% ( 2) 00:18:30.995 17.920 - 18.015: 99.4324% ( 1) 00:18:30.995 18.110 - 18.204: 99.4478% ( 2) 00:18:30.995 18.299 - 18.394: 99.4554% ( 1) 00:18:30.995 26.359 - 26.548: 99.4631% ( 1) 00:18:30.995 33.944 - 34.133: 99.4708% ( 1) 00:18:30.995 34.323 - 34.513: 99.4784% ( 1) 00:18:30.995 157.013 - 157.772: 99.4861% ( 1) 00:18:30.995 2779.212 - 2791.348: 99.4938% ( 1) 00:18:30.995 3980.705 - 4004.978: 99.9540% ( 60) 00:18:30.995 4004.978 - 4029.250: 100.0000% ( 6) 00:18:30.995 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:30.995 [ 00:18:30.995 { 00:18:30.995 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:30.995 "subtype": "Discovery", 00:18:30.995 "listen_addresses": [], 00:18:30.995 "allow_any_host": true, 00:18:30.995 "hosts": [] 00:18:30.995 }, 00:18:30.995 { 00:18:30.995 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:30.995 "subtype": "NVMe", 00:18:30.995 "listen_addresses": [ 00:18:30.995 { 00:18:30.995 "trtype": "VFIOUSER", 00:18:30.995 "adrfam": "IPv4", 00:18:30.995 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:30.995 "trsvcid": "0" 00:18:30.995 } 00:18:30.995 ], 00:18:30.995 "allow_any_host": true, 00:18:30.995 "hosts": [], 00:18:30.995 "serial_number": "SPDK1", 00:18:30.995 "model_number": "SPDK bdev Controller", 00:18:30.995 "max_namespaces": 32, 00:18:30.995 "min_cntlid": 1, 00:18:30.995 "max_cntlid": 65519, 00:18:30.995 "namespaces": [ 00:18:30.995 { 00:18:30.995 "nsid": 1, 00:18:30.995 "bdev_name": "Malloc1", 00:18:30.995 "name": "Malloc1", 00:18:30.995 "nguid": "94E678E91A38420DB707B4AC0DDDBEA3", 00:18:30.995 "uuid": "94e678e9-1a38-420d-b707-b4ac0dddbea3" 00:18:30.995 }, 00:18:30.995 { 00:18:30.995 "nsid": 2, 00:18:30.995 "bdev_name": "Malloc3", 00:18:30.995 "name": "Malloc3", 00:18:30.995 "nguid": "FDC5C2ED9F7646AEB600B184B35467D9", 00:18:30.995 "uuid": "fdc5c2ed-9f76-46ae-b600-b184b35467d9" 00:18:30.995 } 00:18:30.995 ] 00:18:30.995 }, 00:18:30.995 { 00:18:30.995 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:30.995 "subtype": "NVMe", 00:18:30.995 "listen_addresses": [ 00:18:30.995 { 00:18:30.995 "trtype": "VFIOUSER", 00:18:30.995 "adrfam": "IPv4", 00:18:30.995 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:30.995 "trsvcid": "0" 00:18:30.995 } 00:18:30.995 ], 00:18:30.995 "allow_any_host": true, 00:18:30.995 "hosts": [], 00:18:30.995 "serial_number": "SPDK2", 00:18:30.995 "model_number": "SPDK bdev Controller", 00:18:30.995 "max_namespaces": 32, 00:18:30.995 "min_cntlid": 1, 00:18:30.995 "max_cntlid": 65519, 00:18:30.995 "namespaces": [ 00:18:30.995 { 00:18:30.995 "nsid": 1, 00:18:30.995 "bdev_name": "Malloc2", 00:18:30.995 "name": "Malloc2", 00:18:30.995 "nguid": "69518A009AEF4C529C6E22F8C452BEC5", 
00:18:30.995 "uuid": "69518a00-9aef-4c52-9c6e-22f8c452bec5" 00:18:30.995 } 00:18:30.995 ] 00:18:30.995 } 00:18:30.995 ] 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=172184 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:30.995 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:30.996 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:30.996 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:31.329 [2024-11-19 23:43:05.457638] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:31.329 Malloc4 00:18:31.329 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:31.586 [2024-11-19 23:43:05.844531] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:31.586 23:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:31.586 Asynchronous Event Request test 00:18:31.586 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.587 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:31.587 Registering asynchronous event callbacks... 00:18:31.587 Starting namespace attribute notice tests for all controllers... 00:18:31.587 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:31.587 aer_cb - Changed Namespace 00:18:31.587 Cleaning up... 
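The sequence above is the asynchronous-event check: with the aer tool attached to /var/run/vfio-user/domain/vfio-user2/2, a new malloc bdev (Malloc4) is hot-added to nqn.2019-07.io.spdk:cnode2 as namespace 2, the namespace-attribute-changed event fires, and the callback reports "aer_cb - Changed Namespace" before cleanup. A minimal sketch of the two RPC calls involved, assuming a target already serving cnode2 and rpc.py reachable at the shortened path used here (the log uses the full jenkins workspace prefix):

# Hot-add a second namespace to the existing vfio-user subsystem; connected hosts
# receive an NVMe namespace-attribute-changed asynchronous event.
RPC=./scripts/rpc.py   # assumed relative path to SPDK's rpc.py
$RPC bdev_malloc_create 64 512 --name Malloc4
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
$RPC nvmf_get_subsystems   # nsid 2 (Malloc4) should now appear under cnode2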
00:18:31.844 [ 00:18:31.844 { 00:18:31.844 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:31.844 "subtype": "Discovery", 00:18:31.844 "listen_addresses": [], 00:18:31.844 "allow_any_host": true, 00:18:31.844 "hosts": [] 00:18:31.844 }, 00:18:31.844 { 00:18:31.844 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:31.844 "subtype": "NVMe", 00:18:31.844 "listen_addresses": [ 00:18:31.844 { 00:18:31.844 "trtype": "VFIOUSER", 00:18:31.844 "adrfam": "IPv4", 00:18:31.844 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:31.844 "trsvcid": "0" 00:18:31.844 } 00:18:31.844 ], 00:18:31.844 "allow_any_host": true, 00:18:31.844 "hosts": [], 00:18:31.844 "serial_number": "SPDK1", 00:18:31.844 "model_number": "SPDK bdev Controller", 00:18:31.844 "max_namespaces": 32, 00:18:31.844 "min_cntlid": 1, 00:18:31.844 "max_cntlid": 65519, 00:18:31.844 "namespaces": [ 00:18:31.844 { 00:18:31.844 "nsid": 1, 00:18:31.844 "bdev_name": "Malloc1", 00:18:31.844 "name": "Malloc1", 00:18:31.844 "nguid": "94E678E91A38420DB707B4AC0DDDBEA3", 00:18:31.844 "uuid": "94e678e9-1a38-420d-b707-b4ac0dddbea3" 00:18:31.844 }, 00:18:31.844 { 00:18:31.844 "nsid": 2, 00:18:31.844 "bdev_name": "Malloc3", 00:18:31.844 "name": "Malloc3", 00:18:31.844 "nguid": "FDC5C2ED9F7646AEB600B184B35467D9", 00:18:31.844 "uuid": "fdc5c2ed-9f76-46ae-b600-b184b35467d9" 00:18:31.844 } 00:18:31.844 ] 00:18:31.844 }, 00:18:31.844 { 00:18:31.844 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:31.844 "subtype": "NVMe", 00:18:31.844 "listen_addresses": [ 00:18:31.844 { 00:18:31.844 "trtype": "VFIOUSER", 00:18:31.845 "adrfam": "IPv4", 00:18:31.845 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:31.845 "trsvcid": "0" 00:18:31.845 } 00:18:31.845 ], 00:18:31.845 "allow_any_host": true, 00:18:31.845 "hosts": [], 00:18:31.845 "serial_number": "SPDK2", 00:18:31.845 "model_number": "SPDK bdev Controller", 00:18:31.845 "max_namespaces": 32, 00:18:31.845 "min_cntlid": 1, 00:18:31.845 "max_cntlid": 65519, 00:18:31.845 "namespaces": [ 00:18:31.845 { 00:18:31.845 "nsid": 1, 00:18:31.845 "bdev_name": "Malloc2", 00:18:31.845 "name": "Malloc2", 00:18:31.845 "nguid": "69518A009AEF4C529C6E22F8C452BEC5", 00:18:31.845 "uuid": "69518a00-9aef-4c52-9c6e-22f8c452bec5" 00:18:31.845 }, 00:18:31.845 { 00:18:31.845 "nsid": 2, 00:18:31.845 "bdev_name": "Malloc4", 00:18:31.845 "name": "Malloc4", 00:18:31.845 "nguid": "39434D5C769B46678E2A3D47C38BDBF1", 00:18:31.845 "uuid": "39434d5c-769b-4667-8e2a-3d47c38bdbf1" 00:18:31.845 } 00:18:31.845 ] 00:18:31.845 } 00:18:31.845 ] 00:18:31.845 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 172184 00:18:31.845 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:31.845 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 166586 00:18:31.845 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 166586 ']' 00:18:31.845 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 166586 00:18:31.845 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:31.845 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.845 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 166586 00:18:32.103 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.103 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.103 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 166586' 00:18:32.103 killing process with pid 166586 00:18:32.103 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 166586 00:18:32.103 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 166586 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=172332 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 172332' 00:18:32.361 Process pid: 172332 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 172332 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 172332 ']' 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.361 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:32.361 [2024-11-19 23:43:06.512782] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:32.361 [2024-11-19 23:43:06.513817] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:18:32.361 [2024-11-19 23:43:06.513887] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.361 [2024-11-19 23:43:06.581834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.361 [2024-11-19 23:43:06.629036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.361 [2024-11-19 23:43:06.629112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.361 [2024-11-19 23:43:06.629137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.361 [2024-11-19 23:43:06.629148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.361 [2024-11-19 23:43:06.629159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.361 [2024-11-19 23:43:06.630628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.361 [2024-11-19 23:43:06.630693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.361 [2024-11-19 23:43:06.630772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.361 [2024-11-19 23:43:06.630775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.619 [2024-11-19 23:43:06.713210] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:32.619 [2024-11-19 23:43:06.713378] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:32.619 [2024-11-19 23:43:06.713682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:32.619 [2024-11-19 23:43:06.714258] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:32.619 [2024-11-19 23:43:06.714504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
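At this point the target has been restarted with --interrupt-mode (the NOTICE lines above show each nvmf_tgt poll-group thread set to intr mode), and the trace that follows rebuilds the same two vfio-user subsystems, this time creating the transport with -M -I. A condensed sketch of that bring-up, using only commands that appear in the trace below (rpc.py path shortened; the loop mirrors the script's seq 1 2):

# Interrupt-mode variant: create the VFIOUSER transport with -M -I, then one
# malloc-backed subsystem and vfio-user listener per endpoint.
RPC=./scripts/rpc.py   # assumed relative path to SPDK's rpc.py
$RPC nvmf_create_transport -t VFIOUSER -M -I
for i in 1 2; do
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
  $RPC bdev_malloc_create 64 512 -b Malloc$i
  $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
       -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done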
00:18:32.619 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.619 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:32.619 23:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:33.556 23:43:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:33.814 23:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:33.814 23:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:33.814 23:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:33.814 23:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:33.814 23:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:34.381 Malloc1 00:18:34.381 23:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:34.640 23:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:34.899 23:43:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:35.156 23:43:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:35.156 23:43:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:35.156 23:43:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:35.413 Malloc2 00:18:35.671 23:43:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:35.928 23:43:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:36.185 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:36.444 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:36.444 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 172332 00:18:36.444 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 172332 ']' 00:18:36.444 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 172332 00:18:36.444 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:36.444 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.444 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 172332 00:18:36.444 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.444 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.444 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 172332' 00:18:36.444 killing process with pid 172332 00:18:36.444 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 172332 00:18:36.444 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 172332 00:18:36.702 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:36.702 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:36.702 00:18:36.702 real 0m53.593s 00:18:36.702 user 3m26.873s 00:18:36.702 sys 0m3.979s 00:18:36.702 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.702 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:36.702 ************************************ 00:18:36.702 END TEST nvmf_vfio_user 00:18:36.702 ************************************ 00:18:36.702 23:43:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:36.702 23:43:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:36.702 23:43:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.702 23:43:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:36.702 ************************************ 00:18:36.702 START TEST nvmf_vfio_user_nvme_compliance 00:18:36.702 ************************************ 00:18:36.702 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:36.702 * Looking for test storage... 
00:18:36.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:36.702 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:36.702 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:18:36.702 23:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:36.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.961 --rc genhtml_branch_coverage=1 00:18:36.961 --rc genhtml_function_coverage=1 00:18:36.961 --rc genhtml_legend=1 00:18:36.961 --rc geninfo_all_blocks=1 00:18:36.961 --rc geninfo_unexecuted_blocks=1 00:18:36.961 00:18:36.961 ' 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:36.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.961 --rc genhtml_branch_coverage=1 00:18:36.961 --rc genhtml_function_coverage=1 00:18:36.961 --rc genhtml_legend=1 00:18:36.961 --rc geninfo_all_blocks=1 00:18:36.961 --rc geninfo_unexecuted_blocks=1 00:18:36.961 00:18:36.961 ' 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:36.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.961 --rc genhtml_branch_coverage=1 00:18:36.961 --rc genhtml_function_coverage=1 00:18:36.961 --rc genhtml_legend=1 00:18:36.961 --rc geninfo_all_blocks=1 00:18:36.961 --rc geninfo_unexecuted_blocks=1 00:18:36.961 00:18:36.961 ' 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:36.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.961 --rc genhtml_branch_coverage=1 00:18:36.961 --rc genhtml_function_coverage=1 00:18:36.961 --rc genhtml_legend=1 00:18:36.961 --rc geninfo_all_blocks=1 00:18:36.961 --rc 
geninfo_unexecuted_blocks=1 00:18:36.961 00:18:36.961 ' 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.961 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:36.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=172942 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 172942' 00:18:36.962 Process pid: 172942 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 172942 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 172942 ']' 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.962 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:36.962 [2024-11-19 23:43:11.116293] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:18:36.962 [2024-11-19 23:43:11.116400] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.962 [2024-11-19 23:43:11.183706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:36.962 [2024-11-19 23:43:11.230184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.962 [2024-11-19 23:43:11.230249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.962 [2024-11-19 23:43:11.230277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.962 [2024-11-19 23:43:11.230291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.962 [2024-11-19 23:43:11.230304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.962 [2024-11-19 23:43:11.231838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.962 [2024-11-19 23:43:11.231891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.962 [2024-11-19 23:43:11.231908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.220 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.220 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:37.220 23:43:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:38.153 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:38.153 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:38.153 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:38.153 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.153 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:38.153 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.153 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:38.153 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:38.153 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:38.154 malloc0 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:38.154 23:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.154 23:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:38.411 00:18:38.411 00:18:38.411 CUnit - A unit testing framework for C - Version 2.1-3 00:18:38.411 http://cunit.sourceforge.net/ 00:18:38.411 00:18:38.411 00:18:38.411 Suite: nvme_compliance 00:18:38.411 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 23:43:12.589567] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.411 [2024-11-19 23:43:12.590994] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:38.411 [2024-11-19 23:43:12.591019] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:38.411 [2024-11-19 23:43:12.591032] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:38.411 [2024-11-19 23:43:12.592583] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.411 passed 00:18:38.411 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 23:43:12.677173] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.411 [2024-11-19 23:43:12.680194] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.411 passed 00:18:38.668 Test: admin_identify_ns ...[2024-11-19 23:43:12.765664] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.668 [2024-11-19 23:43:12.825092] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:38.668 [2024-11-19 23:43:12.833087] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:38.668 [2024-11-19 23:43:12.854222] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:38.668 passed 00:18:38.668 Test: admin_get_features_mandatory_features ...[2024-11-19 23:43:12.939391] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.668 [2024-11-19 23:43:12.942423] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.668 passed 00:18:38.926 Test: admin_get_features_optional_features ...[2024-11-19 23:43:13.028995] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.926 [2024-11-19 23:43:13.032017] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.926 passed 00:18:38.926 Test: admin_set_features_number_of_queues ...[2024-11-19 23:43:13.113221] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.926 [2024-11-19 23:43:13.218202] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.183 passed 00:18:39.183 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 23:43:13.304724] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.183 [2024-11-19 23:43:13.307750] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.183 passed 00:18:39.183 Test: admin_get_log_page_with_lpo ...[2024-11-19 23:43:13.389989] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.183 [2024-11-19 23:43:13.454090] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:39.183 [2024-11-19 23:43:13.467167] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.440 passed 00:18:39.440 Test: fabric_property_get ...[2024-11-19 23:43:13.550096] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.441 [2024-11-19 23:43:13.551392] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:39.441 [2024-11-19 23:43:13.553119] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.441 passed 00:18:39.441 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 23:43:13.638708] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.441 [2024-11-19 23:43:13.639984] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:39.441 [2024-11-19 23:43:13.641726] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.441 passed 00:18:39.441 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 23:43:13.726968] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.698 [2024-11-19 23:43:13.808092] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:39.698 [2024-11-19 23:43:13.824083] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:39.698 [2024-11-19 23:43:13.832204] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.698 passed 00:18:39.698 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 23:43:13.911817] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.698 [2024-11-19 23:43:13.913159] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:39.698 [2024-11-19 23:43:13.914837] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.698 passed 00:18:39.698 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 23:43:13.999096] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.956 [2024-11-19 23:43:14.077097] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:39.956 [2024-11-19 23:43:14.101082] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:39.956 [2024-11-19 23:43:14.106190] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.956 passed 00:18:39.956 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 23:43:14.189824] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:39.956 [2024-11-19 23:43:14.191150] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:39.956 [2024-11-19 23:43:14.191193] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:39.956 [2024-11-19 23:43:14.192845] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:39.956 passed 00:18:40.214 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 23:43:14.273092] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.214 [2024-11-19 23:43:14.366080] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:40.214 [2024-11-19 23:43:14.374091] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:40.214 [2024-11-19 23:43:14.382095] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:40.214 [2024-11-19 23:43:14.390077] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:40.214 [2024-11-19 23:43:14.419198] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.214 passed 00:18:40.214 Test: admin_create_io_sq_verify_pc ...[2024-11-19 23:43:14.503581] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:40.214 [2024-11-19 23:43:14.523114] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:40.472 [2024-11-19 23:43:14.540956] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:40.472 passed 00:18:40.472 Test: admin_create_io_qp_max_qps ...[2024-11-19 23:43:14.622531] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:41.844 [2024-11-19 23:43:15.730087] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:41.844 [2024-11-19 23:43:16.111028] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:41.844 passed 00:18:42.102 Test: admin_create_io_sq_shared_cq ...[2024-11-19 23:43:16.193632] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:42.102 [2024-11-19 23:43:16.325085] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:42.102 [2024-11-19 23:43:16.362171] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:42.102 passed 00:18:42.102 00:18:42.102 Run Summary: Type Total Ran Passed Failed Inactive 00:18:42.102 suites 1 1 n/a 0 0 00:18:42.102 tests 18 18 18 0 0 00:18:42.102 asserts 
360 360 360 0 n/a 00:18:42.102 00:18:42.102 Elapsed time = 1.564 seconds 00:18:42.360 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 172942 00:18:42.360 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 172942 ']' 00:18:42.360 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 172942 00:18:42.360 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:42.360 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.360 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 172942 00:18:42.360 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.360 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.360 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 172942' 00:18:42.360 killing process with pid 172942 00:18:42.360 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 172942 00:18:42.360 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 172942 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:42.619 00:18:42.619 real 0m5.786s 00:18:42.619 user 0m16.305s 00:18:42.619 sys 0m0.518s 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:42.619 ************************************ 00:18:42.619 END TEST nvmf_vfio_user_nvme_compliance 00:18:42.619 ************************************ 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:42.619 ************************************ 00:18:42.619 START TEST nvmf_vfio_user_fuzz 00:18:42.619 ************************************ 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:42.619 * Looking for test storage... 
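Before the fuzz stage gets going, it is worth noting how the endpoint those 18 compliance tests exercised was assembled: everything above goes through the target's RPC socket, and the standalone compliance binary is then pointed at the vfio-user listener. Condensed sketch of that sequence (rpc_cmd in the trace is the harness wrapper; calling scripts/rpc.py against /var/tmp/spdk.sock is assumed here to be the standalone equivalent):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0               # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

The CUnit summary just above (18/18 tests, 360/360 asserts, about 1.6 seconds) is what the harness reports before killing pid 172942 and moving on.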
00:18:42.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.619 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:42.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.619 --rc genhtml_branch_coverage=1 00:18:42.619 --rc genhtml_function_coverage=1 00:18:42.619 --rc genhtml_legend=1 00:18:42.619 --rc geninfo_all_blocks=1 00:18:42.619 --rc geninfo_unexecuted_blocks=1 00:18:42.619 00:18:42.619 ' 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:42.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.620 --rc genhtml_branch_coverage=1 00:18:42.620 --rc genhtml_function_coverage=1 00:18:42.620 --rc genhtml_legend=1 00:18:42.620 --rc geninfo_all_blocks=1 00:18:42.620 --rc geninfo_unexecuted_blocks=1 00:18:42.620 00:18:42.620 ' 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:42.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.620 --rc genhtml_branch_coverage=1 00:18:42.620 --rc genhtml_function_coverage=1 00:18:42.620 --rc genhtml_legend=1 00:18:42.620 --rc geninfo_all_blocks=1 00:18:42.620 --rc geninfo_unexecuted_blocks=1 00:18:42.620 00:18:42.620 ' 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:42.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.620 --rc genhtml_branch_coverage=1 00:18:42.620 --rc genhtml_function_coverage=1 00:18:42.620 --rc genhtml_legend=1 00:18:42.620 --rc geninfo_all_blocks=1 00:18:42.620 --rc geninfo_unexecuted_blocks=1 00:18:42.620 00:18:42.620 ' 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:42.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=173667 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 173667' 00:18:42.620 Process pid: 173667 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 173667 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 173667 ']' 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
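The fuzz stage rebuilds the same kind of vfio-user endpoint, but with a slimmer target: nvmf_tgt is started on a single core (-m 0x1, pid 173667), and the fuzzer invoked further down is pinned to a different core (-m 0x2), so generator and target do not contend for the same reactor. Launch-side difference only, as a sketch (the RPC setup that follows has the same shape as in the compliance run):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # fuzz target: core 0 only
    # the fuzzer is started later with -m 0x2 (core 1); see the nvme_fuzz invocation below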
00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.620 23:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.186 23:43:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.186 23:43:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:43.186 23:43:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.120 malloc0 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
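With the transport ID in hand, the fuzzer is aimed at that endpoint. Flag readings below come from nvme_fuzz's usage text and are worth re-checking against the tree: -t 30 bounds the run to 30 seconds, -S 123456 fixes the random seed so the generated command stream is reproducible, -F carries the transport ID of the subsystem to fuzz, -a also exercises the admin queue, and -N keeps I/O commands on valid namespace IDs. A standalone rerun from the SPDK root would look like:

    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

In the completion counts printed at the end of the run (~595k I/O commands with 2300 successful, ~76k admin commands with 593 successful), the low success ratio is expected; the signal being checked is that the target survives the full 30 seconds and shuts down cleanly.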
00:18:44.120 23:43:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:16.205 Fuzzing completed. Shutting down the fuzz application 00:19:16.205 00:19:16.205 Dumping successful admin opcodes: 00:19:16.205 8, 9, 10, 24, 00:19:16.206 Dumping successful io opcodes: 00:19:16.206 0, 00:19:16.206 NS: 0x20000081ef00 I/O qp, Total commands completed: 595265, total successful commands: 2300, random_seed: 743749760 00:19:16.206 NS: 0x20000081ef00 admin qp, Total commands completed: 76124, total successful commands: 593, random_seed: 1179748480 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 173667 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 173667 ']' 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 173667 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 173667 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 173667' 00:19:16.206 killing process with pid 173667 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 173667 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 173667 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:16.206 00:19:16.206 real 0m32.186s 00:19:16.206 user 0m31.743s 00:19:16.206 sys 0m28.932s 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:16.206 ************************************ 
00:19:16.206 END TEST nvmf_vfio_user_fuzz 00:19:16.206 ************************************ 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.206 ************************************ 00:19:16.206 START TEST nvmf_auth_target 00:19:16.206 ************************************ 00:19:16.206 23:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:16.206 * Looking for test storage... 00:19:16.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:16.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.206 --rc genhtml_branch_coverage=1 00:19:16.206 --rc genhtml_function_coverage=1 00:19:16.206 --rc genhtml_legend=1 00:19:16.206 --rc geninfo_all_blocks=1 00:19:16.206 --rc geninfo_unexecuted_blocks=1 00:19:16.206 00:19:16.206 ' 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:16.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.206 --rc genhtml_branch_coverage=1 00:19:16.206 --rc genhtml_function_coverage=1 00:19:16.206 --rc genhtml_legend=1 00:19:16.206 --rc geninfo_all_blocks=1 00:19:16.206 --rc geninfo_unexecuted_blocks=1 00:19:16.206 00:19:16.206 ' 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:16.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.206 --rc genhtml_branch_coverage=1 00:19:16.206 --rc genhtml_function_coverage=1 00:19:16.206 --rc genhtml_legend=1 00:19:16.206 --rc geninfo_all_blocks=1 00:19:16.206 --rc geninfo_unexecuted_blocks=1 00:19:16.206 00:19:16.206 ' 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:16.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.206 --rc genhtml_branch_coverage=1 00:19:16.206 --rc genhtml_function_coverage=1 00:19:16.206 --rc genhtml_legend=1 00:19:16.206 --rc geninfo_all_blocks=1 00:19:16.206 --rc geninfo_unexecuted_blocks=1 00:19:16.206 00:19:16.206 ' 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.206 23:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.206 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:16.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:16.207 23:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:17.143 
23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:17.143 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:17.143 23:43:51 
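[Editor's note] The discovery pass above matches known Intel (e810/x722) and Mellanox device IDs against the PCI bus and, in the lines that follow (nvmf/common.sh@411/427), resolves each matched PCI function to its kernel interface name through sysfs. A hedged stand-alone sketch of that lookup; the PCI address is the one reported in this run and is only an example:

    # List the net interfaces backing one PCI function (example address from the trace).
    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
    printf 'net devices under %s: %s\n' "$pci" "${pci_net_devs[*]}"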
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:17.143 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.143 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:17.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:17.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:17.144 23:43:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:17.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:19:17.144 00:19:17.144 --- 10.0.0.2 ping statistics --- 00:19:17.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.144 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:19:17.144 00:19:17.144 --- 10.0.0.1 ping statistics --- 00:19:17.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.144 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:17.144 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.402 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=179114 00:19:17.402 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:17.402 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 179114 00:19:17.402 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 179114 ']' 00:19:17.402 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.402 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.402 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
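[Editor's note] Once both directions ping cleanly, nvmfappstart launches the SPDK NVMe-oF target inside the namespace with auth tracing enabled (-L nvmf_auth) and waits for its RPC socket; pid 179114 in this run. A hedged sketch of that launch using the paths and flags shown in the trace; the polling loop is only a stand-in for the real waitforlisten helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    # Wait until the default RPC socket (/var/tmp/spdk.sock) answers.
    while ! "$SPDK/scripts/rpc.py" rpc_get_methods &> /dev/null; do
        sleep 0.5
    done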
00:19:17.402 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.402 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=179139 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4f37bdece9f081803672925e5a9119e671ed39a46072c82f 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Icn 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4f37bdece9f081803672925e5a9119e671ed39a46072c82f 0 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4f37bdece9f081803672925e5a9119e671ed39a46072c82f 0 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4f37bdece9f081803672925e5a9119e671ed39a46072c82f 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
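[Editor's note] gen_dhchap_key above reads half the requested length in random bytes from /dev/urandom as a hex string, writes the result to a mktemp'd /tmp/spdk.key-* file with mode 0600, and wraps it in the DHHC-1 secret representation via the small inline python step (nvmf/common.sh@733). Judging from the finished secrets printed later in this log, that wrapper appears to produce "DHHC-1:<hash id>:<base64(key bytes + CRC32)>:". The snippet below is a sketch of that encoding under this assumption, not a copy of the script's embedded python:

    # Sketch: build a DHHC-1 secret from a hex key string.
    # Hash id: 0 = none/null, 1 = sha256, 2 = sha384, 3 = sha512 (assumed mapping).
    key=$(xxd -p -c0 -l 24 /dev/urandom)     # 48 hex characters, as in the trace
    digest=0
    python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()               # the ASCII hex string itself is the secret
    digest = int(sys.argv[2])
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
    ' "$key" "$digest"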
00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Icn 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Icn 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Icn 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0a9764928f537d15e7c92725a42d65bde11e0180cc7be7d7397b0991ea4bfd24 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dXC 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0a9764928f537d15e7c92725a42d65bde11e0180cc7be7d7397b0991ea4bfd24 3 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0a9764928f537d15e7c92725a42d65bde11e0180cc7be7d7397b0991ea4bfd24 3 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0a9764928f537d15e7c92725a42d65bde11e0180cc7be7d7397b0991ea4bfd24 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dXC 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dXC 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.dXC 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0c316456802c20dc384ee0af52138103 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Pex 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0c316456802c20dc384ee0af52138103 1 00:19:17.661 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0c316456802c20dc384ee0af52138103 1 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0c316456802c20dc384ee0af52138103 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Pex 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Pex 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Pex 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e5efd352f3aeaa901b3b8dd96b5dae86a8303465fa5da09d 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.RmZ 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e5efd352f3aeaa901b3b8dd96b5dae86a8303465fa5da09d 2 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e5efd352f3aeaa901b3b8dd96b5dae86a8303465fa5da09d 2 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:17.662 23:43:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e5efd352f3aeaa901b3b8dd96b5dae86a8303465fa5da09d 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.RmZ 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.RmZ 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.RmZ 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d73a28d880c1c5dc46d3594730172dcdbe42eb82a18aa804 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.a1q 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d73a28d880c1c5dc46d3594730172dcdbe42eb82a18aa804 2 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d73a28d880c1c5dc46d3594730172dcdbe42eb82a18aa804 2 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d73a28d880c1c5dc46d3594730172dcdbe42eb82a18aa804 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:17.662 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:17.921 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.a1q 00:19:17.921 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.a1q 00:19:17.921 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.a1q 00:19:17.921 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:17.921 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:19:17.921 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.921 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:17.921 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:17.921 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:17.921 23:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7e883f2379c08f124bdd4d6273f468d6 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.uGc 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7e883f2379c08f124bdd4d6273f468d6 1 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7e883f2379c08f124bdd4d6273f468d6 1 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7e883f2379c08f124bdd4d6273f468d6 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.uGc 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.uGc 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.uGc 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ea752c6ceb4646cfad0bd4548155388878b1e49f9697e68f3119b6959082fb60 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.r19 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key ea752c6ceb4646cfad0bd4548155388878b1e49f9697e68f3119b6959082fb60 3 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ea752c6ceb4646cfad0bd4548155388878b1e49f9697e68f3119b6959082fb60 3 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ea752c6ceb4646cfad0bd4548155388878b1e49f9697e68f3119b6959082fb60 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.r19 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.r19 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.r19 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 179114 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 179114 ']' 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.921 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.922 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.922 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.922 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.179 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.179 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:18.179 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 179139 /var/tmp/host.sock 00:19:18.179 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 179139 ']' 00:19:18.179 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:18.179 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.179 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:18.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
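[Editor's note] By this point four host keys (keys[0..3]) and three controller keys (ckeys[0..2]) have been generated; ckeys[3] is deliberately left empty. The counterpart process to the namespaced target is the initiator-side spdk_tgt started at target/auth.sh@88 with its own RPC socket, which the waitforlisten call above is polling for; pid 179139 in this run. A hedged sketch of that host-side launch, again with a simple poll standing in for waitforlisten:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &
    hostpid=$!
    # rpc_cmd talks to the target on /var/tmp/spdk.sock; hostrpc adds -s /var/tmp/host.sock.
    while ! "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done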
00:19:18.179 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.179 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Icn 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Icn 00:19:18.437 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Icn 00:19:18.695 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.dXC ]] 00:19:18.695 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dXC 00:19:18.695 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.695 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.695 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.695 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dXC 00:19:18.695 23:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dXC 00:19:18.951 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:18.951 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Pex 00:19:18.951 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.951 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.951 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.951 23:43:53 
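[Editor's note] Each generated secret is registered twice under the same key name: once on the target (rpc_cmd keyring_file_add_key) so the subsystem can reference it, and once on the host socket (hostrpc) so the initiator's bdev_nvme layer can. The sketch below condenses one such pair for key0/ckey0, reusing the file names from this run:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    # Target side: register the host key and the controller key.
    "$RPC" keyring_file_add_key key0  /tmp/spdk.key-null.Icn
    "$RPC" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dXC
    # Host side: same names, same files, different RPC socket.
    "$RPC" -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.Icn
    "$RPC" -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dXC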
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Pex 00:19:18.951 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Pex 00:19:19.208 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.RmZ ]] 00:19:19.208 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RmZ 00:19:19.208 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.208 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.466 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.466 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RmZ 00:19:19.466 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RmZ 00:19:19.724 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:19.724 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.a1q 00:19:19.724 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.724 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.724 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.724 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.a1q 00:19:19.724 23:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.a1q 00:19:19.981 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.uGc ]] 00:19:19.981 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uGc 00:19:19.982 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.982 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.982 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.982 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uGc 00:19:19.982 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uGc 00:19:20.239 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:20.239 23:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.r19 00:19:20.239 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.239 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.239 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.239 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.r19 00:19:20.239 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.r19 00:19:20.497 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:20.497 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:20.497 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.497 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.497 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:20.498 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:20.756 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:20.756 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.756 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.756 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:20.756 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:20.756 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.756 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.756 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.756 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.756 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.756 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.756 23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.756 
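[Editor's note] Inside the digest/dhgroup/key loop, connect_authenticate (target/auth.sh@65-71) wires up one combination: the host is restricted to the digest and DH group under test with bdev_nvme_set_options, the target is told which keys this host NQN must present with nvmf_subsystem_add_host (the ctrlr key enables bidirectional authentication), and the host then attaches a controller with the matching key names; the full rpc.py invocation follows just below. A sketch of the sha256/null/key0 iteration shown above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Host side: only allow the digest/dhgroup combination under test.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    # Target side: require DH-HMAC-CHAP with key0, controller key ckey0 for bidirectional auth.
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Host side: attach a controller over TCP with the same key names.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0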
23:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.014 00:19:21.014 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.014 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.014 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.272 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.272 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.272 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.272 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.272 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.272 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.272 { 00:19:21.272 "cntlid": 1, 00:19:21.272 "qid": 0, 00:19:21.272 "state": "enabled", 00:19:21.272 "thread": "nvmf_tgt_poll_group_000", 00:19:21.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:21.272 "listen_address": { 00:19:21.272 "trtype": "TCP", 00:19:21.272 "adrfam": "IPv4", 00:19:21.272 "traddr": "10.0.0.2", 00:19:21.272 "trsvcid": "4420" 00:19:21.272 }, 00:19:21.272 "peer_address": { 00:19:21.272 "trtype": "TCP", 00:19:21.272 "adrfam": "IPv4", 00:19:21.272 "traddr": "10.0.0.1", 00:19:21.272 "trsvcid": "37386" 00:19:21.272 }, 00:19:21.272 "auth": { 00:19:21.272 "state": "completed", 00:19:21.272 "digest": "sha256", 00:19:21.272 "dhgroup": "null" 00:19:21.272 } 00:19:21.272 } 00:19:21.272 ]' 00:19:21.272 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.272 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.272 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.531 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:21.531 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.531 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.531 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.531 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.790 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:19:21.790 23:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:19:22.794 23:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.794 23:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.794 23:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.794 23:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.794 23:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.794 23:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.794 23:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.794 23:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:23.051 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:23.051 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.051 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:23.051 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:23.051 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:23.051 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.052 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.052 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.052 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.052 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.052 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.052 23:43:57 
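[Editor's note] After the SPDK-host path is verified and torn down with bdev_nvme_detach_controller, the same secrets are exercised through the kernel initiator: nvme_connect (target/auth.sh@36) hands the literal DHHC-1 strings to nvme-cli as --dhchap-secret (host key) and --dhchap-ctrl-secret (controller key), then the controller is disconnected and the host entry removed from the subsystem. A sketch with the first key pair from this run; the secret values are the ones printed in the trace above:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "${HOSTNQN##*:}" -l 0 \
        --dhchap-secret 'DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==:' \
        --dhchap-ctrl-secret 'DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # expected: "disconnected 1 controller(s)"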
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.052 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.309 00:19:23.309 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.309 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.309 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.567 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.567 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.567 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.567 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.567 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.567 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.567 { 00:19:23.567 "cntlid": 3, 00:19:23.567 "qid": 0, 00:19:23.567 "state": "enabled", 00:19:23.567 "thread": "nvmf_tgt_poll_group_000", 00:19:23.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:23.568 "listen_address": { 00:19:23.568 "trtype": "TCP", 00:19:23.568 "adrfam": "IPv4", 00:19:23.568 "traddr": "10.0.0.2", 00:19:23.568 "trsvcid": "4420" 00:19:23.568 }, 00:19:23.568 "peer_address": { 00:19:23.568 "trtype": "TCP", 00:19:23.568 "adrfam": "IPv4", 00:19:23.568 "traddr": "10.0.0.1", 00:19:23.568 "trsvcid": "37412" 00:19:23.568 }, 00:19:23.568 "auth": { 00:19:23.568 "state": "completed", 00:19:23.568 "digest": "sha256", 00:19:23.568 "dhgroup": "null" 00:19:23.568 } 00:19:23.568 } 00:19:23.568 ]' 00:19:23.568 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.568 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.568 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.568 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:23.568 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.825 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.825 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.825 23:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.084 23:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:19:24.084 23:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:19:25.018 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.018 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.018 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.018 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.018 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.018 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.018 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:25.018 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:25.275 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:25.275 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.275 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.275 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:25.275 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:25.275 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.275 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.275 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.275 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.275 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.275 23:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.275 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.275 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.532 00:19:25.532 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.532 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.532 23:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.791 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.791 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.791 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.791 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.791 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.791 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.791 { 00:19:25.791 "cntlid": 5, 00:19:25.791 "qid": 0, 00:19:25.791 "state": "enabled", 00:19:25.791 "thread": "nvmf_tgt_poll_group_000", 00:19:25.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:25.791 "listen_address": { 00:19:25.791 "trtype": "TCP", 00:19:25.791 "adrfam": "IPv4", 00:19:25.791 "traddr": "10.0.0.2", 00:19:25.791 "trsvcid": "4420" 00:19:25.791 }, 00:19:25.791 "peer_address": { 00:19:25.791 "trtype": "TCP", 00:19:25.791 "adrfam": "IPv4", 00:19:25.791 "traddr": "10.0.0.1", 00:19:25.791 "trsvcid": "55484" 00:19:25.791 }, 00:19:25.791 "auth": { 00:19:25.791 "state": "completed", 00:19:25.791 "digest": "sha256", 00:19:25.791 "dhgroup": "null" 00:19:25.791 } 00:19:25.791 } 00:19:25.791 ]' 00:19:25.791 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.049 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.049 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.049 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:26.049 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.049 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.049 23:44:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.049 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.306 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:19:26.306 23:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:19:27.239 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.239 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.239 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.239 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.239 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.239 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.239 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.239 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.498 23:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.756 00:19:27.756 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.756 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.756 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.321 { 00:19:28.321 "cntlid": 7, 00:19:28.321 "qid": 0, 00:19:28.321 "state": "enabled", 00:19:28.321 "thread": "nvmf_tgt_poll_group_000", 00:19:28.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:28.321 "listen_address": { 00:19:28.321 "trtype": "TCP", 00:19:28.321 "adrfam": "IPv4", 00:19:28.321 "traddr": "10.0.0.2", 00:19:28.321 "trsvcid": "4420" 00:19:28.321 }, 00:19:28.321 "peer_address": { 00:19:28.321 "trtype": "TCP", 00:19:28.321 "adrfam": "IPv4", 00:19:28.321 "traddr": "10.0.0.1", 00:19:28.321 "trsvcid": "55512" 00:19:28.321 }, 00:19:28.321 "auth": { 00:19:28.321 "state": "completed", 00:19:28.321 "digest": "sha256", 00:19:28.321 "dhgroup": "null" 00:19:28.321 } 00:19:28.321 } 00:19:28.321 ]' 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.321 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.579 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:19:28.579 23:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:19:29.512 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.513 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.513 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.513 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.513 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.513 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.513 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.513 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:29.513 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.770 23:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.028 00:19:30.028 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.028 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.029 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.594 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.594 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.594 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.594 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.594 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.594 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.594 { 00:19:30.594 "cntlid": 9, 00:19:30.594 "qid": 0, 00:19:30.594 "state": "enabled", 00:19:30.594 "thread": "nvmf_tgt_poll_group_000", 00:19:30.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:30.594 "listen_address": { 00:19:30.594 "trtype": "TCP", 00:19:30.594 "adrfam": "IPv4", 00:19:30.594 "traddr": "10.0.0.2", 00:19:30.594 "trsvcid": "4420" 00:19:30.594 }, 00:19:30.594 "peer_address": { 00:19:30.594 "trtype": "TCP", 00:19:30.594 "adrfam": "IPv4", 00:19:30.594 "traddr": "10.0.0.1", 00:19:30.594 "trsvcid": "55544" 00:19:30.594 }, 00:19:30.594 "auth": { 00:19:30.594 "state": "completed", 00:19:30.594 "digest": "sha256", 00:19:30.594 "dhgroup": "ffdhe2048" 00:19:30.594 } 00:19:30.594 } 00:19:30.594 ]' 00:19:30.594 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.594 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.594 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.595 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:30.595 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.595 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.595 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.595 23:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.852 23:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:19:30.852 23:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:19:31.786 23:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.786 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.786 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.786 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.786 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.786 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.786 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:31.786 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:32.043 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:32.043 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.043 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.043 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:32.043 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:32.043 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.043 23:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.043 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.043 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.043 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.043 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.043 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.043 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.301 00:19:32.559 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.559 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.559 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.817 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.817 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.817 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.817 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.817 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.817 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.817 { 00:19:32.817 "cntlid": 11, 00:19:32.817 "qid": 0, 00:19:32.817 "state": "enabled", 00:19:32.817 "thread": "nvmf_tgt_poll_group_000", 00:19:32.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:32.817 "listen_address": { 00:19:32.817 "trtype": "TCP", 00:19:32.817 "adrfam": "IPv4", 00:19:32.817 "traddr": "10.0.0.2", 00:19:32.817 "trsvcid": "4420" 00:19:32.817 }, 00:19:32.817 "peer_address": { 00:19:32.817 "trtype": "TCP", 00:19:32.817 "adrfam": "IPv4", 00:19:32.817 "traddr": "10.0.0.1", 00:19:32.817 "trsvcid": "55562" 00:19:32.817 }, 00:19:32.817 "auth": { 00:19:32.817 "state": "completed", 00:19:32.817 "digest": "sha256", 00:19:32.817 "dhgroup": "ffdhe2048" 00:19:32.817 } 00:19:32.817 } 00:19:32.817 ]' 00:19:32.817 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.817 23:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.817 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.817 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:32.817 23:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.817 23:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.817 23:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.817 23:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.075 23:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:19:33.075 23:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:19:34.007 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.007 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.007 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.007 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.007 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.007 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.007 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.008 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.265 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:34.265 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.265 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.265 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:34.265 23:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:34.265 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.265 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.265 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.265 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.265 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.265 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.265 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.265 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.830 00:19:34.830 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.831 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.831 23:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.097 { 00:19:35.097 "cntlid": 13, 00:19:35.097 "qid": 0, 00:19:35.097 "state": "enabled", 00:19:35.097 "thread": "nvmf_tgt_poll_group_000", 00:19:35.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:35.097 "listen_address": { 00:19:35.097 "trtype": "TCP", 00:19:35.097 "adrfam": "IPv4", 00:19:35.097 "traddr": "10.0.0.2", 00:19:35.097 "trsvcid": "4420" 00:19:35.097 }, 00:19:35.097 "peer_address": { 00:19:35.097 "trtype": "TCP", 00:19:35.097 "adrfam": "IPv4", 00:19:35.097 "traddr": "10.0.0.1", 00:19:35.097 "trsvcid": "55598" 00:19:35.097 }, 00:19:35.097 "auth": { 00:19:35.097 "state": "completed", 00:19:35.097 "digest": 
"sha256", 00:19:35.097 "dhgroup": "ffdhe2048" 00:19:35.097 } 00:19:35.097 } 00:19:35.097 ]' 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.097 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.355 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:19:35.355 23:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:19:36.288 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.289 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.289 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.289 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.289 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.289 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.289 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.289 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.547 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:36.547 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.547 23:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:36.547 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:36.547 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.547 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.547 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:36.547 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.547 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.547 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.547 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.547 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.547 23:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.113 00:19:37.113 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.113 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.113 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.371 { 00:19:37.371 "cntlid": 15, 00:19:37.371 "qid": 0, 00:19:37.371 "state": "enabled", 00:19:37.371 "thread": "nvmf_tgt_poll_group_000", 00:19:37.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:37.371 "listen_address": { 00:19:37.371 "trtype": "TCP", 00:19:37.371 "adrfam": "IPv4", 00:19:37.371 "traddr": "10.0.0.2", 00:19:37.371 "trsvcid": "4420" 00:19:37.371 }, 00:19:37.371 "peer_address": { 00:19:37.371 "trtype": "TCP", 00:19:37.371 "adrfam": "IPv4", 00:19:37.371 "traddr": "10.0.0.1", 00:19:37.371 
"trsvcid": "50324" 00:19:37.371 }, 00:19:37.371 "auth": { 00:19:37.371 "state": "completed", 00:19:37.371 "digest": "sha256", 00:19:37.371 "dhgroup": "ffdhe2048" 00:19:37.371 } 00:19:37.371 } 00:19:37.371 ]' 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.371 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.629 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:19:37.629 23:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:19:38.564 23:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.564 23:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.564 23:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.564 23:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.564 23:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.564 23:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.564 23:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.564 23:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.564 23:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.822 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:38.822 23:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.822 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.822 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:38.822 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:38.822 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.822 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.822 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.822 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.822 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.822 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.822 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.822 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.388 00:19:39.388 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.388 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.388 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.646 { 00:19:39.646 "cntlid": 17, 00:19:39.646 "qid": 0, 00:19:39.646 "state": "enabled", 00:19:39.646 "thread": "nvmf_tgt_poll_group_000", 00:19:39.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:39.646 "listen_address": { 00:19:39.646 "trtype": "TCP", 00:19:39.646 "adrfam": "IPv4", 
00:19:39.646 "traddr": "10.0.0.2", 00:19:39.646 "trsvcid": "4420" 00:19:39.646 }, 00:19:39.646 "peer_address": { 00:19:39.646 "trtype": "TCP", 00:19:39.646 "adrfam": "IPv4", 00:19:39.646 "traddr": "10.0.0.1", 00:19:39.646 "trsvcid": "50364" 00:19:39.646 }, 00:19:39.646 "auth": { 00:19:39.646 "state": "completed", 00:19:39.646 "digest": "sha256", 00:19:39.646 "dhgroup": "ffdhe3072" 00:19:39.646 } 00:19:39.646 } 00:19:39.646 ]' 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.646 23:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.905 23:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:19:39.905 23:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:19:40.837 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.837 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.837 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.837 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.837 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.837 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.837 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.838 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.096 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:41.096 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.096 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.096 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:41.096 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:41.096 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.096 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.096 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.096 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.354 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.354 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.354 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.354 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.612 00:19:41.612 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.612 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.612 23:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.870 { 
00:19:41.870 "cntlid": 19, 00:19:41.870 "qid": 0, 00:19:41.870 "state": "enabled", 00:19:41.870 "thread": "nvmf_tgt_poll_group_000", 00:19:41.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:41.870 "listen_address": { 00:19:41.870 "trtype": "TCP", 00:19:41.870 "adrfam": "IPv4", 00:19:41.870 "traddr": "10.0.0.2", 00:19:41.870 "trsvcid": "4420" 00:19:41.870 }, 00:19:41.870 "peer_address": { 00:19:41.870 "trtype": "TCP", 00:19:41.870 "adrfam": "IPv4", 00:19:41.870 "traddr": "10.0.0.1", 00:19:41.870 "trsvcid": "50384" 00:19:41.870 }, 00:19:41.870 "auth": { 00:19:41.870 "state": "completed", 00:19:41.870 "digest": "sha256", 00:19:41.870 "dhgroup": "ffdhe3072" 00:19:41.870 } 00:19:41.870 } 00:19:41.870 ]' 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.870 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.435 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:19:42.436 23:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:19:43.370 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.370 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.370 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.370 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.370 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.370 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.370 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.370 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.628 23:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.886 00:19:43.886 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.886 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.886 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.145 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.145 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.145 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.145 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.145 23:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.145 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.145 { 00:19:44.145 "cntlid": 21, 00:19:44.145 "qid": 0, 00:19:44.145 "state": "enabled", 00:19:44.145 "thread": "nvmf_tgt_poll_group_000", 00:19:44.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.145 "listen_address": { 00:19:44.145 "trtype": "TCP", 00:19:44.145 "adrfam": "IPv4", 00:19:44.145 "traddr": "10.0.0.2", 00:19:44.145 "trsvcid": "4420" 00:19:44.145 }, 00:19:44.145 "peer_address": { 00:19:44.145 "trtype": "TCP", 00:19:44.145 "adrfam": "IPv4", 00:19:44.145 "traddr": "10.0.0.1", 00:19:44.145 "trsvcid": "50420" 00:19:44.145 }, 00:19:44.145 "auth": { 00:19:44.145 "state": "completed", 00:19:44.145 "digest": "sha256", 00:19:44.145 "dhgroup": "ffdhe3072" 00:19:44.145 } 00:19:44.145 } 00:19:44.145 ]' 00:19:44.145 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.145 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.145 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.145 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.145 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.403 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.403 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.403 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.661 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:19:44.661 23:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:19:45.592 23:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.592 23:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.593 23:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.593 23:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.593 23:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:45.593 23:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.593 23:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.593 23:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.851 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.109 00:19:46.109 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.109 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.109 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.367 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.367 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.367 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.367 23:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.367 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.367 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.367 { 00:19:46.367 "cntlid": 23, 00:19:46.367 "qid": 0, 00:19:46.367 "state": "enabled", 00:19:46.367 "thread": "nvmf_tgt_poll_group_000", 00:19:46.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:46.367 "listen_address": { 00:19:46.367 "trtype": "TCP", 00:19:46.367 "adrfam": "IPv4", 00:19:46.367 "traddr": "10.0.0.2", 00:19:46.367 "trsvcid": "4420" 00:19:46.367 }, 00:19:46.367 "peer_address": { 00:19:46.367 "trtype": "TCP", 00:19:46.367 "adrfam": "IPv4", 00:19:46.367 "traddr": "10.0.0.1", 00:19:46.367 "trsvcid": "39456" 00:19:46.367 }, 00:19:46.367 "auth": { 00:19:46.367 "state": "completed", 00:19:46.367 "digest": "sha256", 00:19:46.367 "dhgroup": "ffdhe3072" 00:19:46.367 } 00:19:46.367 } 00:19:46.367 ]' 00:19:46.367 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.625 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.625 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.625 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.625 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.625 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.625 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.625 23:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.890 23:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:19:46.890 23:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:19:47.916 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.916 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.916 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.916 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.916 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:47.916 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.916 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.916 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:47.916 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.174 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.432 00:19:48.432 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.432 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.432 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.689 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.689 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.689 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.689 23:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.946 23:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.946 23:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.946 { 00:19:48.946 "cntlid": 25, 00:19:48.946 "qid": 0, 00:19:48.946 "state": "enabled", 00:19:48.946 "thread": "nvmf_tgt_poll_group_000", 00:19:48.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:48.946 "listen_address": { 00:19:48.946 "trtype": "TCP", 00:19:48.946 "adrfam": "IPv4", 00:19:48.946 "traddr": "10.0.0.2", 00:19:48.946 "trsvcid": "4420" 00:19:48.946 }, 00:19:48.946 "peer_address": { 00:19:48.946 "trtype": "TCP", 00:19:48.946 "adrfam": "IPv4", 00:19:48.946 "traddr": "10.0.0.1", 00:19:48.946 "trsvcid": "39476" 00:19:48.946 }, 00:19:48.946 "auth": { 00:19:48.946 "state": "completed", 00:19:48.946 "digest": "sha256", 00:19:48.946 "dhgroup": "ffdhe4096" 00:19:48.946 } 00:19:48.946 } 00:19:48.946 ]' 00:19:48.946 23:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.946 23:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.946 23:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.946 23:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.946 23:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.946 23:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.946 23:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.946 23:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.204 23:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:19:49.204 23:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:19:50.137 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.137 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.137 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.137 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.137 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.137 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.137 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.137 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.395 23:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.960 00:19:50.960 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.960 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.960 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.218 { 00:19:51.218 "cntlid": 27, 00:19:51.218 "qid": 0, 00:19:51.218 "state": "enabled", 00:19:51.218 "thread": "nvmf_tgt_poll_group_000", 00:19:51.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.218 "listen_address": { 00:19:51.218 "trtype": "TCP", 00:19:51.218 "adrfam": "IPv4", 00:19:51.218 "traddr": "10.0.0.2", 00:19:51.218 "trsvcid": "4420" 00:19:51.218 }, 00:19:51.218 "peer_address": { 00:19:51.218 "trtype": "TCP", 00:19:51.218 "adrfam": "IPv4", 00:19:51.218 "traddr": "10.0.0.1", 00:19:51.218 "trsvcid": "39490" 00:19:51.218 }, 00:19:51.218 "auth": { 00:19:51.218 "state": "completed", 00:19:51.218 "digest": "sha256", 00:19:51.218 "dhgroup": "ffdhe4096" 00:19:51.218 } 00:19:51.218 } 00:19:51.218 ]' 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.218 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.476 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:19:51.476 23:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:19:52.409 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:52.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.409 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.409 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.409 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.409 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.409 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.409 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.409 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.666 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:52.666 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.666 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.666 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:52.667 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:52.667 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.667 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.667 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.667 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.667 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.667 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.667 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.667 23:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.231 00:19:53.231 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
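Every iteration in this stretch of the log is the same round trip: restrict the host-side bdev_nvme layer to one digest/dhgroup pair, allow the host NQN on the subsystem with a key pair, attach a controller through the host RPC socket (this is where the DH-HMAC-CHAP handshake actually runs), check the negotiated digest/dhgroup/state on the target's qpair, then tear down and repeat the handshake with nvme-cli using the plaintext DHHC-1 secrets. A minimal sketch of one such iteration follows; it is not the test script itself, the RPC paths, socket, NQNs and key names are copied from the log above, the DHHC-1 secrets are elided, and it assumes the named keys (key2/ckey2) were registered with the target earlier in the run.

#!/usr/bin/env bash
# Sketch of one auth-loop iteration as seen in this log (assumptions noted above).
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Host-side RPC server listens on /var/tmp/host.sock; target uses the default socket.
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

digest=sha256
dhgroup=ffdhe4096
keyid=2
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# 1. Limit the host to a single digest/dhgroup combination for this pass.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host NQN on the subsystem with the chosen key pair (target side).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Attach a controller via the host RPC server; the DH-HMAC-CHAP handshake runs here.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

# 4. Verify the negotiated parameters on the target's qpair.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

# 5. Tear down, then redo the same handshake with nvme-cli using the plaintext
#    DHHC-1 secrets printed in the log (elided here), and clean up the host entry.
hostrpc bdev_nvme_detach_controller nvme0
# nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
#     --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
#     --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
# nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each pass only changes the digest/dhgroup given to bdev_nvme_set_options and the key index, which is why the surrounding log repeats the same commands with different arguments.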
00:19:53.231 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.231 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.489 { 00:19:53.489 "cntlid": 29, 00:19:53.489 "qid": 0, 00:19:53.489 "state": "enabled", 00:19:53.489 "thread": "nvmf_tgt_poll_group_000", 00:19:53.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.489 "listen_address": { 00:19:53.489 "trtype": "TCP", 00:19:53.489 "adrfam": "IPv4", 00:19:53.489 "traddr": "10.0.0.2", 00:19:53.489 "trsvcid": "4420" 00:19:53.489 }, 00:19:53.489 "peer_address": { 00:19:53.489 "trtype": "TCP", 00:19:53.489 "adrfam": "IPv4", 00:19:53.489 "traddr": "10.0.0.1", 00:19:53.489 "trsvcid": "39510" 00:19:53.489 }, 00:19:53.489 "auth": { 00:19:53.489 "state": "completed", 00:19:53.489 "digest": "sha256", 00:19:53.489 "dhgroup": "ffdhe4096" 00:19:53.489 } 00:19:53.489 } 00:19:53.489 ]' 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.489 23:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.747 23:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:19:53.747 23:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: 
--dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:19:54.680 23:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.680 23:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.680 23:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.680 23:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.938 23:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.938 23:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.938 23:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.938 23:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.195 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:55.195 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.195 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.195 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:55.195 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:55.195 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.195 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:55.195 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.195 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.196 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.196 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:55.196 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.196 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.453 00:19:55.453 23:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.453 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.453 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.712 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.712 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.712 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.712 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.712 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.712 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.712 { 00:19:55.712 "cntlid": 31, 00:19:55.712 "qid": 0, 00:19:55.712 "state": "enabled", 00:19:55.712 "thread": "nvmf_tgt_poll_group_000", 00:19:55.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:55.712 "listen_address": { 00:19:55.712 "trtype": "TCP", 00:19:55.712 "adrfam": "IPv4", 00:19:55.712 "traddr": "10.0.0.2", 00:19:55.712 "trsvcid": "4420" 00:19:55.712 }, 00:19:55.712 "peer_address": { 00:19:55.712 "trtype": "TCP", 00:19:55.712 "adrfam": "IPv4", 00:19:55.712 "traddr": "10.0.0.1", 00:19:55.712 "trsvcid": "36068" 00:19:55.712 }, 00:19:55.712 "auth": { 00:19:55.712 "state": "completed", 00:19:55.712 "digest": "sha256", 00:19:55.712 "dhgroup": "ffdhe4096" 00:19:55.712 } 00:19:55.712 } 00:19:55.712 ]' 00:19:55.712 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.712 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.712 23:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.970 23:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:55.970 23:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.970 23:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.970 23:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.970 23:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.228 23:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:19:56.228 23:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:19:57.161 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.161 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.161 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.161 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.161 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.161 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.161 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.161 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.161 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.419 23:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.986 00:19:57.986 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.986 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.986 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.244 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.244 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.244 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.244 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.244 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.244 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.244 { 00:19:58.244 "cntlid": 33, 00:19:58.244 "qid": 0, 00:19:58.244 "state": "enabled", 00:19:58.244 "thread": "nvmf_tgt_poll_group_000", 00:19:58.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.244 "listen_address": { 00:19:58.244 "trtype": "TCP", 00:19:58.244 "adrfam": "IPv4", 00:19:58.244 "traddr": "10.0.0.2", 00:19:58.244 "trsvcid": "4420" 00:19:58.244 }, 00:19:58.244 "peer_address": { 00:19:58.244 "trtype": "TCP", 00:19:58.244 "adrfam": "IPv4", 00:19:58.244 "traddr": "10.0.0.1", 00:19:58.244 "trsvcid": "36096" 00:19:58.244 }, 00:19:58.244 "auth": { 00:19:58.244 "state": "completed", 00:19:58.244 "digest": "sha256", 00:19:58.244 "dhgroup": "ffdhe6144" 00:19:58.244 } 00:19:58.244 } 00:19:58.244 ]' 00:19:58.244 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.244 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.244 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.244 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.244 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.502 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.502 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.502 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.759 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret 
DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:19:58.759 23:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:19:59.692 23:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.692 23:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.692 23:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.692 23:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.692 23:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.692 23:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.692 23:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.692 23:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.950 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.515 00:20:00.515 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.515 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.515 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.773 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.773 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.773 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.773 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.773 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.773 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.773 { 00:20:00.773 "cntlid": 35, 00:20:00.773 "qid": 0, 00:20:00.773 "state": "enabled", 00:20:00.773 "thread": "nvmf_tgt_poll_group_000", 00:20:00.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.773 "listen_address": { 00:20:00.773 "trtype": "TCP", 00:20:00.773 "adrfam": "IPv4", 00:20:00.773 "traddr": "10.0.0.2", 00:20:00.773 "trsvcid": "4420" 00:20:00.773 }, 00:20:00.773 "peer_address": { 00:20:00.773 "trtype": "TCP", 00:20:00.773 "adrfam": "IPv4", 00:20:00.773 "traddr": "10.0.0.1", 00:20:00.773 "trsvcid": "36138" 00:20:00.773 }, 00:20:00.773 "auth": { 00:20:00.773 "state": "completed", 00:20:00.773 "digest": "sha256", 00:20:00.773 "dhgroup": "ffdhe6144" 00:20:00.773 } 00:20:00.773 } 00:20:00.773 ]' 00:20:00.773 23:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.773 23:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.773 23:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.773 23:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.773 23:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.031 23:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.031 23:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.031 23:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.289 23:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:20:01.289 23:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:20:02.221 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.221 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.221 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.221 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.221 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.221 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.221 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.221 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.479 23:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.046 00:20:03.046 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.046 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.046 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.303 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.303 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.303 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.303 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.303 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.303 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.303 { 00:20:03.303 "cntlid": 37, 00:20:03.303 "qid": 0, 00:20:03.303 "state": "enabled", 00:20:03.303 "thread": "nvmf_tgt_poll_group_000", 00:20:03.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.303 "listen_address": { 00:20:03.303 "trtype": "TCP", 00:20:03.303 "adrfam": "IPv4", 00:20:03.303 "traddr": "10.0.0.2", 00:20:03.303 "trsvcid": "4420" 00:20:03.303 }, 00:20:03.303 "peer_address": { 00:20:03.303 "trtype": "TCP", 00:20:03.303 "adrfam": "IPv4", 00:20:03.303 "traddr": "10.0.0.1", 00:20:03.303 "trsvcid": "36156" 00:20:03.303 }, 00:20:03.303 "auth": { 00:20:03.303 "state": "completed", 00:20:03.303 "digest": "sha256", 00:20:03.303 "dhgroup": "ffdhe6144" 00:20:03.303 } 00:20:03.303 } 00:20:03.303 ]' 00:20:03.303 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.303 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.303 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.303 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:03.303 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.304 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.304 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:03.304 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.562 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:20:03.562 23:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:20:04.496 23:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.496 23:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.496 23:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.496 23:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.496 23:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.496 23:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.496 23:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:04.496 23:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:04.754 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:04.754 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.755 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.755 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:04.755 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:04.755 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.755 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:04.755 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.755 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.755 23:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.755 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:04.755 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.755 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.325 00:20:05.582 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.582 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.582 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.840 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.840 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.840 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.840 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.840 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.840 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.840 { 00:20:05.840 "cntlid": 39, 00:20:05.840 "qid": 0, 00:20:05.840 "state": "enabled", 00:20:05.840 "thread": "nvmf_tgt_poll_group_000", 00:20:05.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:05.840 "listen_address": { 00:20:05.840 "trtype": "TCP", 00:20:05.840 "adrfam": "IPv4", 00:20:05.840 "traddr": "10.0.0.2", 00:20:05.840 "trsvcid": "4420" 00:20:05.840 }, 00:20:05.840 "peer_address": { 00:20:05.840 "trtype": "TCP", 00:20:05.840 "adrfam": "IPv4", 00:20:05.840 "traddr": "10.0.0.1", 00:20:05.840 "trsvcid": "36170" 00:20:05.840 }, 00:20:05.840 "auth": { 00:20:05.840 "state": "completed", 00:20:05.840 "digest": "sha256", 00:20:05.840 "dhgroup": "ffdhe6144" 00:20:05.840 } 00:20:05.840 } 00:20:05.840 ]' 00:20:05.840 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.840 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.840 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.840 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:05.840 23:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.840 23:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:05.840 23:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.840 23:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.099 23:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:20:06.099 23:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:20:07.032 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.032 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.032 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.032 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.032 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.032 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.032 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.032 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.032 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.290 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:07.290 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.290 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.290 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:07.290 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:07.548 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.548 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.548 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
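[editor's note] Each authentication round in this trace reduces to the same short RPC sequence against the target socket (default /var/tmp/spdk.sock) and the host socket (/var/tmp/host.sock). A minimal stand-alone sketch of one round follows, mirroring the flags visible above; it assumes the repo-relative scripts/rpc.py path, that DH-HMAC-CHAP keys named key0/ckey0 are already registered (e.g. with keyring_file_add_key, done before the portion of the trace shown here), and HOST_NQN is a placeholder for the host's NQN.

  # Restrict the host bdev layer to one digest/dhgroup combination (here sha256 + ffdhe8192).
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Allow the host NQN on the target subsystem and bind its DH-HMAC-CHAP key pair.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      "$HOST_NQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a controller from the host side; authentication runs during the CONNECT exchange.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOST_NQN" \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

The trace then verifies the controller with bdev_nvme_get_controllers and nvmf_subsystem_get_qpairs before detaching and removing the host again.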
00:20:07.548 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.548 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.548 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.548 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.548 23:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.481 00:20:08.481 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.481 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.481 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.739 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.739 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.739 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.739 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.739 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.739 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.739 { 00:20:08.739 "cntlid": 41, 00:20:08.739 "qid": 0, 00:20:08.739 "state": "enabled", 00:20:08.739 "thread": "nvmf_tgt_poll_group_000", 00:20:08.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:08.739 "listen_address": { 00:20:08.739 "trtype": "TCP", 00:20:08.739 "adrfam": "IPv4", 00:20:08.739 "traddr": "10.0.0.2", 00:20:08.739 "trsvcid": "4420" 00:20:08.739 }, 00:20:08.739 "peer_address": { 00:20:08.739 "trtype": "TCP", 00:20:08.739 "adrfam": "IPv4", 00:20:08.739 "traddr": "10.0.0.1", 00:20:08.739 "trsvcid": "34896" 00:20:08.739 }, 00:20:08.739 "auth": { 00:20:08.739 "state": "completed", 00:20:08.739 "digest": "sha256", 00:20:08.739 "dhgroup": "ffdhe8192" 00:20:08.739 } 00:20:08.739 } 00:20:08.739 ]' 00:20:08.739 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.739 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.739 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.739 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.739 23:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.740 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.740 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.740 23:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.997 23:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:20:08.997 23:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:20:09.931 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.931 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.931 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.931 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.931 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.931 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.931 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.931 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.496 23:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.429 00:20:11.429 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.429 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.429 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.429 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.429 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.429 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.429 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.429 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.429 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.429 { 00:20:11.429 "cntlid": 43, 00:20:11.429 "qid": 0, 00:20:11.429 "state": "enabled", 00:20:11.429 "thread": "nvmf_tgt_poll_group_000", 00:20:11.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:11.429 "listen_address": { 00:20:11.429 "trtype": "TCP", 00:20:11.429 "adrfam": "IPv4", 00:20:11.429 "traddr": "10.0.0.2", 00:20:11.429 "trsvcid": "4420" 00:20:11.429 }, 00:20:11.429 "peer_address": { 00:20:11.429 "trtype": "TCP", 00:20:11.429 "adrfam": "IPv4", 00:20:11.429 "traddr": "10.0.0.1", 00:20:11.429 "trsvcid": "34918" 00:20:11.429 }, 00:20:11.429 "auth": { 00:20:11.429 "state": "completed", 00:20:11.429 "digest": "sha256", 00:20:11.429 "dhgroup": "ffdhe8192" 00:20:11.429 } 00:20:11.429 } 00:20:11.429 ]' 00:20:11.429 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.686 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:11.686 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.686 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.687 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.687 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.687 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.687 23:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.946 23:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:20:11.946 23:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:20:12.942 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.942 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.942 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.942 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.942 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.942 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.942 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.942 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.200 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:13.200 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.200 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.200 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:13.200 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:13.200 23:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.200 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.200 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.200 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.200 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.200 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.200 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.200 23:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.134 00:20:14.134 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.134 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.134 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.392 { 00:20:14.392 "cntlid": 45, 00:20:14.392 "qid": 0, 00:20:14.392 "state": "enabled", 00:20:14.392 "thread": "nvmf_tgt_poll_group_000", 00:20:14.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:14.392 "listen_address": { 00:20:14.392 "trtype": "TCP", 00:20:14.392 "adrfam": "IPv4", 00:20:14.392 "traddr": "10.0.0.2", 00:20:14.392 "trsvcid": "4420" 00:20:14.392 }, 00:20:14.392 "peer_address": { 00:20:14.392 "trtype": "TCP", 00:20:14.392 "adrfam": "IPv4", 00:20:14.392 "traddr": "10.0.0.1", 00:20:14.392 "trsvcid": "34960" 00:20:14.392 }, 00:20:14.392 "auth": { 00:20:14.392 "state": "completed", 00:20:14.392 "digest": "sha256", 00:20:14.392 "dhgroup": "ffdhe8192" 00:20:14.392 } 00:20:14.392 } 00:20:14.392 ]' 00:20:14.392 
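[editor's note] Alongside the bdev path, the test also exercises the kernel host through nvme-cli, passing the DHHC-1 secrets directly on the command line (the @80 nvme_connect / @36 nvme connect entries above). A rough stand-alone sketch of that flow; the NQNs, host UUID, and secrets are placeholders, and the gen-dhchap-key helper depends on the installed nvme-cli version.

  # Generate SHA-256-transformed DH-HMAC-CHAP secrets for this host NQN (newer nvme-cli;
  # older versions require hand-supplied DHHC-1:xx:...: strings like the ones in the trace).
  key=$(nvme gen-dhchap-key --hmac=1 --nqn "$HOST_NQN")
  ctrl_key=$(nvme gen-dhchap-key --hmac=1 --nqn "$HOST_NQN")
  # Both secrets must match the key/ckey material registered for this host on the target.

  # Connect with in-band authentication; --dhchap-ctrl-secret additionally authenticates
  # the controller back to the host (bidirectional authentication).
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2024-03.io.spdk:cnode0 \
      -q "$HOST_NQN" --hostid "$HOST_ID" -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"

  # Tear the association down again once the checks are done.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0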
23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.392 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.957 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:20:14.957 23:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:20:15.890 23:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.890 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.890 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.890 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.890 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.890 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.890 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:15.890 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:16.148 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:16.148 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.148 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.148 23:44:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:16.148 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.148 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.148 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:16.148 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.148 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.148 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.148 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.148 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.148 23:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.081 00:20:17.081 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.081 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.081 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.339 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.339 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.339 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.339 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.339 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.339 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.339 { 00:20:17.339 "cntlid": 47, 00:20:17.339 "qid": 0, 00:20:17.339 "state": "enabled", 00:20:17.339 "thread": "nvmf_tgt_poll_group_000", 00:20:17.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.339 "listen_address": { 00:20:17.339 "trtype": "TCP", 00:20:17.339 "adrfam": "IPv4", 00:20:17.339 "traddr": "10.0.0.2", 00:20:17.339 "trsvcid": "4420" 00:20:17.339 }, 00:20:17.339 "peer_address": { 00:20:17.339 "trtype": "TCP", 00:20:17.339 "adrfam": "IPv4", 00:20:17.339 "traddr": "10.0.0.1", 00:20:17.339 "trsvcid": "47426" 00:20:17.339 }, 00:20:17.339 "auth": { 00:20:17.339 "state": "completed", 00:20:17.339 
"digest": "sha256", 00:20:17.339 "dhgroup": "ffdhe8192" 00:20:17.339 } 00:20:17.339 } 00:20:17.339 ]' 00:20:17.339 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.339 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.339 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.339 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:17.339 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.596 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.596 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.596 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.854 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:20:17.854 23:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:20:18.788 23:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.788 23:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.788 23:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.788 23:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.788 23:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.788 23:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:18.788 23:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.788 23:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.788 23:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:18.788 23:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.046 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:19.046 23:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.046 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.046 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:19.046 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.046 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.047 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.047 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.047 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.047 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.047 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.047 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.047 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.612 00:20:19.612 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.612 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.612 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.612 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.612 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.612 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.612 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.870 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.870 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.870 { 00:20:19.870 "cntlid": 49, 00:20:19.870 "qid": 0, 00:20:19.870 "state": "enabled", 00:20:19.870 "thread": "nvmf_tgt_poll_group_000", 00:20:19.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.870 "listen_address": { 00:20:19.870 "trtype": "TCP", 00:20:19.870 "adrfam": "IPv4", 
00:20:19.870 "traddr": "10.0.0.2", 00:20:19.870 "trsvcid": "4420" 00:20:19.870 }, 00:20:19.870 "peer_address": { 00:20:19.870 "trtype": "TCP", 00:20:19.870 "adrfam": "IPv4", 00:20:19.870 "traddr": "10.0.0.1", 00:20:19.870 "trsvcid": "47454" 00:20:19.870 }, 00:20:19.870 "auth": { 00:20:19.870 "state": "completed", 00:20:19.870 "digest": "sha384", 00:20:19.870 "dhgroup": "null" 00:20:19.870 } 00:20:19.870 } 00:20:19.870 ]' 00:20:19.870 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.870 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.870 23:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.870 23:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:19.870 23:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.870 23:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.870 23:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.870 23:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.128 23:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:20:20.128 23:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:20:21.062 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.062 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.062 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.062 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.062 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.062 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.062 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:21.062 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.627 23:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.885 00:20:21.885 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.885 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.885 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.143 { 00:20:22.143 "cntlid": 51, 00:20:22.143 "qid": 0, 00:20:22.143 "state": "enabled", 
00:20:22.143 "thread": "nvmf_tgt_poll_group_000", 00:20:22.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:22.143 "listen_address": { 00:20:22.143 "trtype": "TCP", 00:20:22.143 "adrfam": "IPv4", 00:20:22.143 "traddr": "10.0.0.2", 00:20:22.143 "trsvcid": "4420" 00:20:22.143 }, 00:20:22.143 "peer_address": { 00:20:22.143 "trtype": "TCP", 00:20:22.143 "adrfam": "IPv4", 00:20:22.143 "traddr": "10.0.0.1", 00:20:22.143 "trsvcid": "47490" 00:20:22.143 }, 00:20:22.143 "auth": { 00:20:22.143 "state": "completed", 00:20:22.143 "digest": "sha384", 00:20:22.143 "dhgroup": "null" 00:20:22.143 } 00:20:22.143 } 00:20:22.143 ]' 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.143 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.401 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:20:22.401 23:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:20:23.774 23:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.774 23:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.774 23:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.774 23:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.774 23:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.774 23:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.774 23:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:23.774 23:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:23.774 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:23.774 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.774 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.774 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:23.774 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.774 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.775 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.775 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.775 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.775 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.775 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.775 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.775 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.340 00:20:24.340 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.340 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.340 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.598 23:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.598 { 00:20:24.598 "cntlid": 53, 00:20:24.598 "qid": 0, 00:20:24.598 "state": "enabled", 00:20:24.598 "thread": "nvmf_tgt_poll_group_000", 00:20:24.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.598 "listen_address": { 00:20:24.598 "trtype": "TCP", 00:20:24.598 "adrfam": "IPv4", 00:20:24.598 "traddr": "10.0.0.2", 00:20:24.598 "trsvcid": "4420" 00:20:24.598 }, 00:20:24.598 "peer_address": { 00:20:24.598 "trtype": "TCP", 00:20:24.598 "adrfam": "IPv4", 00:20:24.598 "traddr": "10.0.0.1", 00:20:24.598 "trsvcid": "47530" 00:20:24.598 }, 00:20:24.598 "auth": { 00:20:24.598 "state": "completed", 00:20:24.598 "digest": "sha384", 00:20:24.598 "dhgroup": "null" 00:20:24.598 } 00:20:24.598 } 00:20:24.598 ]' 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.598 23:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.857 23:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:20:24.857 23:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:20:25.789 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.789 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.789 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.789 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.789 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.789 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:25.789 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.789 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:26.354 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:26.354 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.354 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.354 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:26.354 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:26.355 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.355 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:26.355 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.355 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.355 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.355 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:26.355 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.355 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.612 00:20:26.612 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.612 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.612 23:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.870 { 00:20:26.870 "cntlid": 55, 00:20:26.870 "qid": 0, 00:20:26.870 "state": "enabled", 00:20:26.870 "thread": "nvmf_tgt_poll_group_000", 00:20:26.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:26.870 "listen_address": { 00:20:26.870 "trtype": "TCP", 00:20:26.870 "adrfam": "IPv4", 00:20:26.870 "traddr": "10.0.0.2", 00:20:26.870 "trsvcid": "4420" 00:20:26.870 }, 00:20:26.870 "peer_address": { 00:20:26.870 "trtype": "TCP", 00:20:26.870 "adrfam": "IPv4", 00:20:26.870 "traddr": "10.0.0.1", 00:20:26.870 "trsvcid": "41184" 00:20:26.870 }, 00:20:26.870 "auth": { 00:20:26.870 "state": "completed", 00:20:26.870 "digest": "sha384", 00:20:26.870 "dhgroup": "null" 00:20:26.870 } 00:20:26.870 } 00:20:26.870 ]' 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.870 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.128 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:20:27.128 23:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.501 23:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.501 23:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.759 00:20:28.759 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.759 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.759 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.017 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.017 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.017 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:29.017 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.017 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.017 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.017 { 00:20:29.017 "cntlid": 57, 00:20:29.017 "qid": 0, 00:20:29.017 "state": "enabled", 00:20:29.017 "thread": "nvmf_tgt_poll_group_000", 00:20:29.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.017 "listen_address": { 00:20:29.017 "trtype": "TCP", 00:20:29.017 "adrfam": "IPv4", 00:20:29.017 "traddr": "10.0.0.2", 00:20:29.017 "trsvcid": "4420" 00:20:29.017 }, 00:20:29.017 "peer_address": { 00:20:29.017 "trtype": "TCP", 00:20:29.017 "adrfam": "IPv4", 00:20:29.017 "traddr": "10.0.0.1", 00:20:29.017 "trsvcid": "41212" 00:20:29.017 }, 00:20:29.017 "auth": { 00:20:29.017 "state": "completed", 00:20:29.017 "digest": "sha384", 00:20:29.017 "dhgroup": "ffdhe2048" 00:20:29.017 } 00:20:29.017 } 00:20:29.017 ]' 00:20:29.017 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.275 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.275 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.275 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:29.275 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.275 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.275 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.275 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.533 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:20:29.533 23:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:20:30.465 23:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.465 23:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.465 23:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.465 23:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.465 23:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.465 23:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.465 23:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:30.465 23:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.030 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.288 00:20:31.288 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.288 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.288 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.546 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.546 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.546 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.546 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.546 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.546 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.546 { 00:20:31.546 "cntlid": 59, 00:20:31.546 "qid": 0, 00:20:31.546 "state": "enabled", 00:20:31.546 "thread": "nvmf_tgt_poll_group_000", 00:20:31.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:31.546 "listen_address": { 00:20:31.546 "trtype": "TCP", 00:20:31.546 "adrfam": "IPv4", 00:20:31.546 "traddr": "10.0.0.2", 00:20:31.546 "trsvcid": "4420" 00:20:31.546 }, 00:20:31.546 "peer_address": { 00:20:31.546 "trtype": "TCP", 00:20:31.546 "adrfam": "IPv4", 00:20:31.546 "traddr": "10.0.0.1", 00:20:31.546 "trsvcid": "41224" 00:20:31.546 }, 00:20:31.546 "auth": { 00:20:31.546 "state": "completed", 00:20:31.546 "digest": "sha384", 00:20:31.546 "dhgroup": "ffdhe2048" 00:20:31.546 } 00:20:31.546 } 00:20:31.546 ]' 00:20:31.546 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.546 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.546 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.546 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:31.546 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.546 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.804 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.804 23:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.062 23:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:20:32.062 23:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:20:32.997 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.997 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.997 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.997 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.997 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.997 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.997 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:32.997 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:33.255 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:33.255 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.256 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.256 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:33.256 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:33.256 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.256 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.256 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.256 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.256 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.256 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.256 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.256 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.513 00:20:33.513 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.513 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.513 23:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.078 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.078 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.078 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.078 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.078 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.078 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.078 { 00:20:34.078 "cntlid": 61, 00:20:34.078 "qid": 0, 00:20:34.078 "state": "enabled", 00:20:34.078 "thread": "nvmf_tgt_poll_group_000", 00:20:34.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.078 "listen_address": { 00:20:34.078 "trtype": "TCP", 00:20:34.078 "adrfam": "IPv4", 00:20:34.078 "traddr": "10.0.0.2", 00:20:34.078 "trsvcid": "4420" 00:20:34.078 }, 00:20:34.078 "peer_address": { 00:20:34.078 "trtype": "TCP", 00:20:34.078 "adrfam": "IPv4", 00:20:34.078 "traddr": "10.0.0.1", 00:20:34.078 "trsvcid": "41258" 00:20:34.078 }, 00:20:34.078 "auth": { 00:20:34.078 "state": "completed", 00:20:34.078 "digest": "sha384", 00:20:34.078 "dhgroup": "ffdhe2048" 00:20:34.078 } 00:20:34.078 } 00:20:34.078 ]' 00:20:34.078 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.079 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.079 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.079 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.079 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.079 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.079 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.079 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.335 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:20:34.335 23:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:20:35.267 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.267 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.267 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.267 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.267 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.267 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.267 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.267 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.526 23:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:36.092 00:20:36.092 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.092 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.092 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.092 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.092 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.092 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.092 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.092 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.350 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.350 { 00:20:36.350 "cntlid": 63, 00:20:36.350 "qid": 0, 00:20:36.350 "state": "enabled", 00:20:36.350 "thread": "nvmf_tgt_poll_group_000", 00:20:36.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:36.350 "listen_address": { 00:20:36.350 "trtype": "TCP", 00:20:36.350 "adrfam": "IPv4", 00:20:36.350 "traddr": "10.0.0.2", 00:20:36.350 "trsvcid": "4420" 00:20:36.350 }, 00:20:36.350 "peer_address": { 00:20:36.350 "trtype": "TCP", 00:20:36.350 "adrfam": "IPv4", 00:20:36.350 "traddr": "10.0.0.1", 00:20:36.350 "trsvcid": "58944" 00:20:36.350 }, 00:20:36.350 "auth": { 00:20:36.350 "state": "completed", 00:20:36.350 "digest": "sha384", 00:20:36.350 "dhgroup": "ffdhe2048" 00:20:36.350 } 00:20:36.350 } 00:20:36.350 ]' 00:20:36.350 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.350 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.350 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.350 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:36.350 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.350 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.350 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.350 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.608 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:20:36.608 23:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:20:37.541 23:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:37.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.541 23:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.541 23:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.541 23:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.541 23:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.541 23:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.541 23:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.541 23:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:37.541 23:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.852 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.129 
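[Editor's note] The records above and below all repeat the same per-(digest, dhgroup, keyid) cycle from target/auth.sh: restrict the host initiator's allowed DH-HMAC-CHAP parameters, register the host on the target subsystem with the key under test, attach and tear down a controller over the host RPC socket, then redo the handshake with the kernel initiator via nvme-cli before cleaning up. Below is a minimal bash sketch of that cycle, reconstructed only from the commands visible in this trace; the variable names (HOST_RPC, TGT_RPC, SUBNQN, HOSTNQN), the loop structure, and the target-side RPC socket are readability assumptions, not the script's actual code.

#!/usr/bin/env bash
set -euo pipefail

HOST_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
TGT_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # assumed: default target socket
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do   # dhgroups seen in this part of the trace
  for keyid in 0 1 2 3; do
    # 1. Limit the host-side bdev_nvme module to one digest/dhgroup combination.
    $HOST_RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"

    # 2. Register the host on the target with the DH-HMAC-CHAP key under test
    #    (the trace passes --dhchap-ctrlr-key ckey$keyid only when a controller key exists).
    $TGT_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid"

    # 3. Attach a controller through the host RPC and confirm authentication completed.
    $HOST_RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$keyid"
    $HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    $TGT_RPC nvmf_subsystem_get_qpairs "$SUBNQN"              # expect: auth.state == "completed"
    $HOST_RPC bdev_nvme_detach_controller nvme0

    # 4. Repeat the handshake with the kernel initiator, then clean up for the next iteration.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret "DHHC-1:..."                          # secret elided; per-key values appear in the trace
    nvme disconnect -n "$SUBNQN"
    $TGT_RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
  done
done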
00:20:38.387 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.387 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.387 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.645 { 00:20:38.645 "cntlid": 65, 00:20:38.645 "qid": 0, 00:20:38.645 "state": "enabled", 00:20:38.645 "thread": "nvmf_tgt_poll_group_000", 00:20:38.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:38.645 "listen_address": { 00:20:38.645 "trtype": "TCP", 00:20:38.645 "adrfam": "IPv4", 00:20:38.645 "traddr": "10.0.0.2", 00:20:38.645 "trsvcid": "4420" 00:20:38.645 }, 00:20:38.645 "peer_address": { 00:20:38.645 "trtype": "TCP", 00:20:38.645 "adrfam": "IPv4", 00:20:38.645 "traddr": "10.0.0.1", 00:20:38.645 "trsvcid": "58964" 00:20:38.645 }, 00:20:38.645 "auth": { 00:20:38.645 "state": "completed", 00:20:38.645 "digest": "sha384", 00:20:38.645 "dhgroup": "ffdhe3072" 00:20:38.645 } 00:20:38.645 } 00:20:38.645 ]' 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.645 23:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.902 23:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:20:38.903 23:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:20:39.836 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.836 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.836 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.836 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.836 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.836 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.836 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:39.836 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.094 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:40.094 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.094 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.094 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:40.094 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:40.094 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.094 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.094 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.094 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.094 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.095 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.095 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.095 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.660 00:20:40.660 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.660 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.660 23:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.919 { 00:20:40.919 "cntlid": 67, 00:20:40.919 "qid": 0, 00:20:40.919 "state": "enabled", 00:20:40.919 "thread": "nvmf_tgt_poll_group_000", 00:20:40.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:40.919 "listen_address": { 00:20:40.919 "trtype": "TCP", 00:20:40.919 "adrfam": "IPv4", 00:20:40.919 "traddr": "10.0.0.2", 00:20:40.919 "trsvcid": "4420" 00:20:40.919 }, 00:20:40.919 "peer_address": { 00:20:40.919 "trtype": "TCP", 00:20:40.919 "adrfam": "IPv4", 00:20:40.919 "traddr": "10.0.0.1", 00:20:40.919 "trsvcid": "58984" 00:20:40.919 }, 00:20:40.919 "auth": { 00:20:40.919 "state": "completed", 00:20:40.919 "digest": "sha384", 00:20:40.919 "dhgroup": "ffdhe3072" 00:20:40.919 } 00:20:40.919 } 00:20:40.919 ]' 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.919 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.177 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret 
DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:20:41.177 23:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:20:42.112 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.112 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.112 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.112 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.112 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.112 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.112 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.112 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.678 23:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.936 00:20:42.936 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.937 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.937 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.195 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.195 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.195 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.195 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.195 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.195 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.195 { 00:20:43.195 "cntlid": 69, 00:20:43.195 "qid": 0, 00:20:43.195 "state": "enabled", 00:20:43.195 "thread": "nvmf_tgt_poll_group_000", 00:20:43.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.195 "listen_address": { 00:20:43.195 "trtype": "TCP", 00:20:43.195 "adrfam": "IPv4", 00:20:43.195 "traddr": "10.0.0.2", 00:20:43.195 "trsvcid": "4420" 00:20:43.195 }, 00:20:43.195 "peer_address": { 00:20:43.195 "trtype": "TCP", 00:20:43.195 "adrfam": "IPv4", 00:20:43.195 "traddr": "10.0.0.1", 00:20:43.195 "trsvcid": "59022" 00:20:43.195 }, 00:20:43.195 "auth": { 00:20:43.195 "state": "completed", 00:20:43.195 "digest": "sha384", 00:20:43.195 "dhgroup": "ffdhe3072" 00:20:43.195 } 00:20:43.195 } 00:20:43.195 ]' 00:20:43.195 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.195 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.195 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.452 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:43.452 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.452 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.453 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.453 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:43.710 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:20:43.710 23:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:20:44.643 23:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.644 23:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.644 23:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.644 23:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.644 23:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.644 23:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.644 23:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.644 23:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.901 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:44.901 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.901 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.901 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:44.901 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:44.901 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.902 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:44.902 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.902 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.902 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.902 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:20:44.902 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.902 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.158 00:20:45.417 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.417 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.417 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.675 { 00:20:45.675 "cntlid": 71, 00:20:45.675 "qid": 0, 00:20:45.675 "state": "enabled", 00:20:45.675 "thread": "nvmf_tgt_poll_group_000", 00:20:45.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.675 "listen_address": { 00:20:45.675 "trtype": "TCP", 00:20:45.675 "adrfam": "IPv4", 00:20:45.675 "traddr": "10.0.0.2", 00:20:45.675 "trsvcid": "4420" 00:20:45.675 }, 00:20:45.675 "peer_address": { 00:20:45.675 "trtype": "TCP", 00:20:45.675 "adrfam": "IPv4", 00:20:45.675 "traddr": "10.0.0.1", 00:20:45.675 "trsvcid": "59048" 00:20:45.675 }, 00:20:45.675 "auth": { 00:20:45.675 "state": "completed", 00:20:45.675 "digest": "sha384", 00:20:45.675 "dhgroup": "ffdhe3072" 00:20:45.675 } 00:20:45.675 } 00:20:45.675 ]' 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.675 23:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.933 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:20:45.933 23:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:20:46.867 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.867 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.867 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.867 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.867 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.867 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.867 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.867 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.867 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
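The kernel-initiator leg interleaved with the RPC checks above follows the pattern below, using the helpers and variables from the sketch earlier; the DHHC-1 secrets are the generated key material printed in this log and are elided here, so DHHC_KEY and DHHC_CTRL_KEY are placeholders only.

# Connect with nvme-cli using the same host NQN/ID and the DH-HMAC-CHAP secrets for this
# iteration (secret strings elided; DHHC_KEY/DHHC_CTRL_KEY stand in for the values in the log).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
     --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
     --dhchap-secret "$DHHC_KEY" --dhchap-ctrl-secret "$DHHC_CTRL_KEY"

# Tear down: disconnect the kernel controller and drop the host entry so the next
# digest/dhgroup/key combination starts from a clean subsystem.
nvme disconnect -n "$subnqn"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"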
00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.125 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.690 00:20:47.690 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.690 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.690 23:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.948 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.948 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.948 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.948 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.948 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.949 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.949 { 00:20:47.949 "cntlid": 73, 00:20:47.949 "qid": 0, 00:20:47.949 "state": "enabled", 00:20:47.949 "thread": "nvmf_tgt_poll_group_000", 00:20:47.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.949 "listen_address": { 00:20:47.949 "trtype": "TCP", 00:20:47.949 "adrfam": "IPv4", 00:20:47.949 "traddr": "10.0.0.2", 00:20:47.949 "trsvcid": "4420" 00:20:47.949 }, 00:20:47.949 "peer_address": { 00:20:47.949 "trtype": "TCP", 00:20:47.949 "adrfam": "IPv4", 00:20:47.949 "traddr": "10.0.0.1", 00:20:47.949 "trsvcid": "48488" 00:20:47.949 }, 00:20:47.949 "auth": { 00:20:47.949 "state": "completed", 00:20:47.949 "digest": "sha384", 00:20:47.949 "dhgroup": "ffdhe4096" 00:20:47.949 } 00:20:47.949 } 00:20:47.949 ]' 00:20:47.949 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.949 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.949 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.949 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.949 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.949 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.949 
23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.949 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.207 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:20:48.207 23:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.581 23:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.146 00:20:50.146 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.146 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.146 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.405 { 00:20:50.405 "cntlid": 75, 00:20:50.405 "qid": 0, 00:20:50.405 "state": "enabled", 00:20:50.405 "thread": "nvmf_tgt_poll_group_000", 00:20:50.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.405 "listen_address": { 00:20:50.405 "trtype": "TCP", 00:20:50.405 "adrfam": "IPv4", 00:20:50.405 "traddr": "10.0.0.2", 00:20:50.405 "trsvcid": "4420" 00:20:50.405 }, 00:20:50.405 "peer_address": { 00:20:50.405 "trtype": "TCP", 00:20:50.405 "adrfam": "IPv4", 00:20:50.405 "traddr": "10.0.0.1", 00:20:50.405 "trsvcid": "48506" 00:20:50.405 }, 00:20:50.405 "auth": { 00:20:50.405 "state": "completed", 00:20:50.405 "digest": "sha384", 00:20:50.405 "dhgroup": "ffdhe4096" 00:20:50.405 } 00:20:50.405 } 00:20:50.405 ]' 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.405 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.663 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:20:50.663 23:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:20:52.037 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.037 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.037 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.037 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.037 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.037 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.037 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.037 23:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.037 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.604 00:20:52.604 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.604 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.604 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.862 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.862 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.862 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.862 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.862 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.862 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.862 { 00:20:52.862 "cntlid": 77, 00:20:52.862 "qid": 0, 00:20:52.862 "state": "enabled", 00:20:52.862 "thread": "nvmf_tgt_poll_group_000", 00:20:52.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:52.862 "listen_address": { 00:20:52.862 "trtype": "TCP", 00:20:52.862 "adrfam": "IPv4", 00:20:52.862 "traddr": "10.0.0.2", 00:20:52.862 "trsvcid": "4420" 00:20:52.862 }, 00:20:52.862 "peer_address": { 00:20:52.862 "trtype": "TCP", 00:20:52.862 "adrfam": "IPv4", 00:20:52.862 "traddr": "10.0.0.1", 00:20:52.862 "trsvcid": "48532" 00:20:52.862 }, 00:20:52.862 "auth": { 00:20:52.862 "state": "completed", 00:20:52.862 "digest": "sha384", 00:20:52.862 "dhgroup": "ffdhe4096" 00:20:52.862 } 00:20:52.862 } 00:20:52.862 ]' 00:20:52.862 23:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.862 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.862 23:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.862 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.862 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.862 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.862 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.862 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.119 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:20:53.119 23:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:20:54.052 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.311 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.311 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.311 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.311 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.311 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.311 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.311 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.569 23:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.827 00:20:54.827 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.827 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.827 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.086 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.086 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.086 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.086 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.086 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.086 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.086 { 00:20:55.086 "cntlid": 79, 00:20:55.086 "qid": 0, 00:20:55.086 "state": "enabled", 00:20:55.086 "thread": "nvmf_tgt_poll_group_000", 00:20:55.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.086 "listen_address": { 00:20:55.086 "trtype": "TCP", 00:20:55.086 "adrfam": "IPv4", 00:20:55.086 "traddr": "10.0.0.2", 00:20:55.086 "trsvcid": "4420" 00:20:55.086 }, 00:20:55.086 "peer_address": { 00:20:55.086 "trtype": "TCP", 00:20:55.086 "adrfam": "IPv4", 00:20:55.086 "traddr": "10.0.0.1", 00:20:55.086 "trsvcid": "48552" 00:20:55.086 }, 00:20:55.086 "auth": { 00:20:55.086 "state": "completed", 00:20:55.086 "digest": "sha384", 00:20:55.086 "dhgroup": "ffdhe4096" 00:20:55.086 } 00:20:55.086 } 00:20:55.086 ]' 00:20:55.086 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.344 23:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.344 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.344 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:55.344 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.344 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.344 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.344 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.601 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:20:55.601 23:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:20:56.535 23:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.535 23:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.535 23:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.535 23:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.535 23:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.535 23:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.535 23:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.535 23:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.535 23:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.794 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:56.794 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.794 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.794 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:56.794 23:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:56.794 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.794 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.794 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.794 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.794 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.794 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.794 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.794 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.359 00:20:57.359 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.359 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.359 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.926 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.926 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.926 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.926 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.926 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.926 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.926 { 00:20:57.926 "cntlid": 81, 00:20:57.926 "qid": 0, 00:20:57.926 "state": "enabled", 00:20:57.926 "thread": "nvmf_tgt_poll_group_000", 00:20:57.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.926 "listen_address": { 00:20:57.926 "trtype": "TCP", 00:20:57.926 "adrfam": "IPv4", 00:20:57.926 "traddr": "10.0.0.2", 00:20:57.926 "trsvcid": "4420" 00:20:57.926 }, 00:20:57.926 "peer_address": { 00:20:57.926 "trtype": "TCP", 00:20:57.926 "adrfam": "IPv4", 00:20:57.926 "traddr": "10.0.0.1", 00:20:57.926 "trsvcid": "38406" 00:20:57.926 }, 00:20:57.926 "auth": { 00:20:57.926 "state": "completed", 00:20:57.926 "digest": 
"sha384", 00:20:57.926 "dhgroup": "ffdhe6144" 00:20:57.926 } 00:20:57.926 } 00:20:57.926 ]' 00:20:57.926 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.926 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.926 23:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.926 23:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:57.926 23:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.926 23:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.926 23:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.926 23:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.184 23:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:20:58.184 23:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:20:59.118 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.118 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.118 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.118 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.118 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.118 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.118 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.118 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.377 23:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.943 00:20:59.943 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.943 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.943 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.202 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.202 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.202 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.202 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.460 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.460 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.460 { 00:21:00.460 "cntlid": 83, 00:21:00.460 "qid": 0, 00:21:00.460 "state": "enabled", 00:21:00.460 "thread": "nvmf_tgt_poll_group_000", 00:21:00.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.460 "listen_address": { 00:21:00.460 "trtype": "TCP", 00:21:00.460 "adrfam": "IPv4", 00:21:00.460 "traddr": "10.0.0.2", 00:21:00.460 
"trsvcid": "4420" 00:21:00.460 }, 00:21:00.460 "peer_address": { 00:21:00.460 "trtype": "TCP", 00:21:00.460 "adrfam": "IPv4", 00:21:00.460 "traddr": "10.0.0.1", 00:21:00.460 "trsvcid": "38440" 00:21:00.460 }, 00:21:00.460 "auth": { 00:21:00.460 "state": "completed", 00:21:00.460 "digest": "sha384", 00:21:00.460 "dhgroup": "ffdhe6144" 00:21:00.460 } 00:21:00.460 } 00:21:00.460 ]' 00:21:00.460 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.460 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.460 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.460 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:00.460 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.460 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.460 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.460 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.718 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:21:00.718 23:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:21:01.652 23:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.652 23:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.652 23:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.652 23:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.652 23:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.652 23:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.652 23:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.652 23:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:02.218 
23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:02.218 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.218 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.218 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:02.218 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.218 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.218 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.218 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.218 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.218 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.218 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.218 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.218 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.784 00:21:02.784 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.784 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.785 23:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.090 { 00:21:03.090 "cntlid": 85, 00:21:03.090 "qid": 0, 00:21:03.090 "state": "enabled", 00:21:03.090 "thread": "nvmf_tgt_poll_group_000", 00:21:03.090 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:03.090 "listen_address": { 00:21:03.090 "trtype": "TCP", 00:21:03.090 "adrfam": "IPv4", 00:21:03.090 "traddr": "10.0.0.2", 00:21:03.090 "trsvcid": "4420" 00:21:03.090 }, 00:21:03.090 "peer_address": { 00:21:03.090 "trtype": "TCP", 00:21:03.090 "adrfam": "IPv4", 00:21:03.090 "traddr": "10.0.0.1", 00:21:03.090 "trsvcid": "38484" 00:21:03.090 }, 00:21:03.090 "auth": { 00:21:03.090 "state": "completed", 00:21:03.090 "digest": "sha384", 00:21:03.090 "dhgroup": "ffdhe6144" 00:21:03.090 } 00:21:03.090 } 00:21:03.090 ]' 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.090 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.374 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:21:03.374 23:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:21:04.307 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.308 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.308 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.308 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.308 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.308 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.308 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:04.308 23:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.566 23:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.131 00:21:05.131 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.131 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.131 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.388 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.388 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.388 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.388 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.388 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.388 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.388 { 00:21:05.388 "cntlid": 87, 
00:21:05.388 "qid": 0, 00:21:05.388 "state": "enabled", 00:21:05.388 "thread": "nvmf_tgt_poll_group_000", 00:21:05.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:05.388 "listen_address": { 00:21:05.388 "trtype": "TCP", 00:21:05.388 "adrfam": "IPv4", 00:21:05.388 "traddr": "10.0.0.2", 00:21:05.389 "trsvcid": "4420" 00:21:05.389 }, 00:21:05.389 "peer_address": { 00:21:05.389 "trtype": "TCP", 00:21:05.389 "adrfam": "IPv4", 00:21:05.389 "traddr": "10.0.0.1", 00:21:05.389 "trsvcid": "38520" 00:21:05.389 }, 00:21:05.389 "auth": { 00:21:05.389 "state": "completed", 00:21:05.389 "digest": "sha384", 00:21:05.389 "dhgroup": "ffdhe6144" 00:21:05.389 } 00:21:05.389 } 00:21:05.389 ]' 00:21:05.389 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.646 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.646 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.646 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.646 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.646 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.646 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.646 23:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.905 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:21:05.905 23:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:21:06.837 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.837 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.837 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.837 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.837 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.837 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.837 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.837 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.838 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.403 23:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.336 00:21:08.336 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.336 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.336 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.336 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.336 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.336 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.336 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.336 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.336 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.336 { 00:21:08.336 "cntlid": 89, 00:21:08.336 "qid": 0, 00:21:08.336 "state": "enabled", 00:21:08.336 "thread": "nvmf_tgt_poll_group_000", 00:21:08.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.336 "listen_address": { 00:21:08.336 "trtype": "TCP", 00:21:08.336 "adrfam": "IPv4", 00:21:08.336 "traddr": "10.0.0.2", 00:21:08.336 "trsvcid": "4420" 00:21:08.336 }, 00:21:08.336 "peer_address": { 00:21:08.336 "trtype": "TCP", 00:21:08.336 "adrfam": "IPv4", 00:21:08.336 "traddr": "10.0.0.1", 00:21:08.336 "trsvcid": "38918" 00:21:08.336 }, 00:21:08.336 "auth": { 00:21:08.336 "state": "completed", 00:21:08.336 "digest": "sha384", 00:21:08.336 "dhgroup": "ffdhe8192" 00:21:08.336 } 00:21:08.336 } 00:21:08.336 ]' 00:21:08.336 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.594 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.594 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.594 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:08.594 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.594 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.594 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.594 23:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.852 23:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:21:08.852 23:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:21:09.785 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.785 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.785 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.785 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.785 23:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.785 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.785 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:09.785 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:10.350 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:10.350 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.350 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.351 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:10.351 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:10.351 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.351 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.351 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.351 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.351 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.351 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.351 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.351 23:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.284 00:21:11.284 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.284 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.284 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.542 { 00:21:11.542 "cntlid": 91, 00:21:11.542 "qid": 0, 00:21:11.542 "state": "enabled", 00:21:11.542 "thread": "nvmf_tgt_poll_group_000", 00:21:11.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.542 "listen_address": { 00:21:11.542 "trtype": "TCP", 00:21:11.542 "adrfam": "IPv4", 00:21:11.542 "traddr": "10.0.0.2", 00:21:11.542 "trsvcid": "4420" 00:21:11.542 }, 00:21:11.542 "peer_address": { 00:21:11.542 "trtype": "TCP", 00:21:11.542 "adrfam": "IPv4", 00:21:11.542 "traddr": "10.0.0.1", 00:21:11.542 "trsvcid": "38940" 00:21:11.542 }, 00:21:11.542 "auth": { 00:21:11.542 "state": "completed", 00:21:11.542 "digest": "sha384", 00:21:11.542 "dhgroup": "ffdhe8192" 00:21:11.542 } 00:21:11.542 } 00:21:11.542 ]' 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.542 23:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.800 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:21:11.800 23:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:21:12.732 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.733 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.733 23:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.733 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.733 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.733 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.733 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.733 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:13.309 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:13.309 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.310 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.310 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:13.310 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:13.310 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.310 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.310 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.310 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.310 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.310 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.310 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.310 23:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.246 00:21:14.246 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.246 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.246 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.246 23:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.246 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.246 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.247 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.247 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.247 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.247 { 00:21:14.247 "cntlid": 93, 00:21:14.247 "qid": 0, 00:21:14.247 "state": "enabled", 00:21:14.247 "thread": "nvmf_tgt_poll_group_000", 00:21:14.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.247 "listen_address": { 00:21:14.247 "trtype": "TCP", 00:21:14.247 "adrfam": "IPv4", 00:21:14.247 "traddr": "10.0.0.2", 00:21:14.247 "trsvcid": "4420" 00:21:14.247 }, 00:21:14.247 "peer_address": { 00:21:14.247 "trtype": "TCP", 00:21:14.247 "adrfam": "IPv4", 00:21:14.247 "traddr": "10.0.0.1", 00:21:14.247 "trsvcid": "38980" 00:21:14.247 }, 00:21:14.247 "auth": { 00:21:14.247 "state": "completed", 00:21:14.247 "digest": "sha384", 00:21:14.247 "dhgroup": "ffdhe8192" 00:21:14.247 } 00:21:14.247 } 00:21:14.247 ]' 00:21:14.504 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.504 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.504 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.504 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:14.504 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.504 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.504 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.504 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.762 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:21:14.762 23:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:21:15.695 23:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.695 23:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.695 23:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.695 23:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.695 23:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.695 23:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.695 23:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.695 23:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.954 23:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.887 00:21:16.888 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.888 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.888 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.146 { 00:21:17.146 "cntlid": 95, 00:21:17.146 "qid": 0, 00:21:17.146 "state": "enabled", 00:21:17.146 "thread": "nvmf_tgt_poll_group_000", 00:21:17.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.146 "listen_address": { 00:21:17.146 "trtype": "TCP", 00:21:17.146 "adrfam": "IPv4", 00:21:17.146 "traddr": "10.0.0.2", 00:21:17.146 "trsvcid": "4420" 00:21:17.146 }, 00:21:17.146 "peer_address": { 00:21:17.146 "trtype": "TCP", 00:21:17.146 "adrfam": "IPv4", 00:21:17.146 "traddr": "10.0.0.1", 00:21:17.146 "trsvcid": "40336" 00:21:17.146 }, 00:21:17.146 "auth": { 00:21:17.146 "state": "completed", 00:21:17.146 "digest": "sha384", 00:21:17.146 "dhgroup": "ffdhe8192" 00:21:17.146 } 00:21:17.146 } 00:21:17.146 ]' 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.146 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.713 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:21:17.713 23:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:21:18.647 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.647 23:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.647 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.647 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.647 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.647 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:18.647 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.647 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.647 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:18.647 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.905 23:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.163 00:21:19.163 
23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.163 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.163 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.420 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.420 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.420 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.420 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.420 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.420 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.420 { 00:21:19.420 "cntlid": 97, 00:21:19.420 "qid": 0, 00:21:19.420 "state": "enabled", 00:21:19.420 "thread": "nvmf_tgt_poll_group_000", 00:21:19.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.420 "listen_address": { 00:21:19.420 "trtype": "TCP", 00:21:19.420 "adrfam": "IPv4", 00:21:19.420 "traddr": "10.0.0.2", 00:21:19.420 "trsvcid": "4420" 00:21:19.420 }, 00:21:19.420 "peer_address": { 00:21:19.420 "trtype": "TCP", 00:21:19.420 "adrfam": "IPv4", 00:21:19.420 "traddr": "10.0.0.1", 00:21:19.420 "trsvcid": "40374" 00:21:19.420 }, 00:21:19.420 "auth": { 00:21:19.420 "state": "completed", 00:21:19.420 "digest": "sha512", 00:21:19.420 "dhgroup": "null" 00:21:19.420 } 00:21:19.420 } 00:21:19.420 ]' 00:21:19.420 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.420 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.420 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.678 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:19.678 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.678 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.678 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.678 23:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.936 23:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:21:19.936 23:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:21:20.871 23:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.871 23:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.871 23:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.871 23:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.871 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.871 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.871 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:20.871 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.129 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.387 00:21:21.387 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.387 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.387 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.645 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.645 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.645 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.645 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.645 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.645 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.645 { 00:21:21.645 "cntlid": 99, 00:21:21.645 "qid": 0, 00:21:21.645 "state": "enabled", 00:21:21.645 "thread": "nvmf_tgt_poll_group_000", 00:21:21.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.645 "listen_address": { 00:21:21.645 "trtype": "TCP", 00:21:21.645 "adrfam": "IPv4", 00:21:21.645 "traddr": "10.0.0.2", 00:21:21.645 "trsvcid": "4420" 00:21:21.645 }, 00:21:21.645 "peer_address": { 00:21:21.645 "trtype": "TCP", 00:21:21.645 "adrfam": "IPv4", 00:21:21.645 "traddr": "10.0.0.1", 00:21:21.645 "trsvcid": "40398" 00:21:21.645 }, 00:21:21.645 "auth": { 00:21:21.645 "state": "completed", 00:21:21.645 "digest": "sha512", 00:21:21.645 "dhgroup": "null" 00:21:21.645 } 00:21:21.645 } 00:21:21.645 ]' 00:21:21.645 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.903 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.903 23:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.903 23:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:21.903 23:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.903 23:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.903 23:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.903 23:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.161 23:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:21:22.161 23:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:21:23.096 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.096 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.096 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.096 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.096 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.096 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.096 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.096 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
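For readability, the host-side RPC sequence that each iteration traced above boils down to is sketched here. This is a condensed, illustrative reconstruction from the commands in this log, not part of the test script: the key names (key2/ckey2 in this pass), the /var/tmp/host.sock socket, the addresses and the SPDK checkout path are taken from the trace, while the shell variables and the assumption that the target listens on its default RPC socket (rpc_cmd hides the socket in the trace) are added only for the sketch.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
HOSTRPC="$SPDK/scripts/rpc.py -s /var/tmp/host.sock"   # host-side SPDK app, as used by the hostrpc wrapper above
TGTRPC="$SPDK/scripts/rpc.py"                          # target side; default RPC socket assumed
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1. Limit the host to one digest/dhgroup combination (sha512 + null in this pass).
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

# 2. Target: authorize the host NQN with a DH-HMAC-CHAP key pair (key2/ckey2 here).
$TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Host: attach a controller that must authenticate with the same keys.
$HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4. Verify: controller present on the host, target qpair reports a completed auth exchange.
$HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'          # expects nvme0
$TGTRPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# 5. Detach before the next key/dhgroup combination.
$HOSTRPC bdev_nvme_detach_controller nvme0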
00:21:23.354 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.920 00:21:23.920 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.920 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.920 23:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.920 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.920 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.920 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.921 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.179 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.179 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.179 { 00:21:24.179 "cntlid": 101, 00:21:24.179 "qid": 0, 00:21:24.179 "state": "enabled", 00:21:24.179 "thread": "nvmf_tgt_poll_group_000", 00:21:24.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.179 "listen_address": { 00:21:24.179 "trtype": "TCP", 00:21:24.179 "adrfam": "IPv4", 00:21:24.179 "traddr": "10.0.0.2", 00:21:24.179 "trsvcid": "4420" 00:21:24.179 }, 00:21:24.179 "peer_address": { 00:21:24.179 "trtype": "TCP", 00:21:24.179 "adrfam": "IPv4", 00:21:24.179 "traddr": "10.0.0.1", 00:21:24.179 "trsvcid": "40418" 00:21:24.179 }, 00:21:24.179 "auth": { 00:21:24.179 "state": "completed", 00:21:24.179 "digest": "sha512", 00:21:24.179 "dhgroup": "null" 00:21:24.179 } 00:21:24.179 } 00:21:24.179 ]' 00:21:24.179 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.179 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.179 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.179 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:24.179 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.179 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.179 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.179 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.437 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:21:24.437 23:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:21:25.382 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.382 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.382 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.382 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.382 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.382 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.382 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.382 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.639 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:25.639 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.639 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.639 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:25.639 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:25.639 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.639 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:25.639 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.639 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.639 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.640 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:25.640 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.640 23:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.897 00:21:25.897 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.897 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.897 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.156 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.156 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.156 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.156 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.156 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.156 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.156 { 00:21:26.156 "cntlid": 103, 00:21:26.156 "qid": 0, 00:21:26.156 "state": "enabled", 00:21:26.156 "thread": "nvmf_tgt_poll_group_000", 00:21:26.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:26.156 "listen_address": { 00:21:26.156 "trtype": "TCP", 00:21:26.156 "adrfam": "IPv4", 00:21:26.156 "traddr": "10.0.0.2", 00:21:26.156 "trsvcid": "4420" 00:21:26.156 }, 00:21:26.156 "peer_address": { 00:21:26.156 "trtype": "TCP", 00:21:26.156 "adrfam": "IPv4", 00:21:26.156 "traddr": "10.0.0.1", 00:21:26.156 "trsvcid": "38200" 00:21:26.156 }, 00:21:26.156 "auth": { 00:21:26.156 "state": "completed", 00:21:26.156 "digest": "sha512", 00:21:26.156 "dhgroup": "null" 00:21:26.156 } 00:21:26.156 } 00:21:26.156 ]' 00:21:26.156 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.414 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.414 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.414 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:26.414 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.414 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.414 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.414 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.672 23:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:21:26.672 23:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:21:27.604 23:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.604 23:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.604 23:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.604 23:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.604 23:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.604 23:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.604 23:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.604 23:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:27.604 23:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
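Note: the trace repeats one pattern per digest/DH-group/key combination: the host-side bdev_nvme options are restricted to the digest and DH group under test, the host NQN is registered on the subsystem with the key pair under test, and a controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key names. The following is a minimal sketch of one such iteration, assuming the SPDK target and the host application (RPC socket /var/tmp/host.sock) are already running and that the keys key0/ckey0 were registered earlier in the script; paths, NQNs and key names are taken from the trace above.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as driven by target/auth.sh
# (values copied from the trace above; adjust for your environment).
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock                 # RPC socket of the host-side SPDK app
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Limit the host to the digest/DH group under test (here sha512 + ffdhe2048).
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Allow the host NQN on the subsystem with the key pair under test
# (key0/ckey0 are assumed to have been added to the keyring earlier in the script).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller from the host side, authenticating with the same keys.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm the controller came up before checking the negotiated auth parameters.
"$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name'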
00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.171 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.432 00:21:28.432 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.432 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.432 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.731 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.731 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.731 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.731 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.731 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.731 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.731 { 00:21:28.731 "cntlid": 105, 00:21:28.731 "qid": 0, 00:21:28.731 "state": "enabled", 00:21:28.731 "thread": "nvmf_tgt_poll_group_000", 00:21:28.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:28.731 "listen_address": { 00:21:28.731 "trtype": "TCP", 00:21:28.731 "adrfam": "IPv4", 00:21:28.731 "traddr": "10.0.0.2", 00:21:28.731 "trsvcid": "4420" 00:21:28.731 }, 00:21:28.731 "peer_address": { 00:21:28.731 "trtype": "TCP", 00:21:28.731 "adrfam": "IPv4", 00:21:28.731 "traddr": "10.0.0.1", 00:21:28.731 "trsvcid": "38232" 00:21:28.731 }, 00:21:28.731 "auth": { 00:21:28.731 "state": "completed", 00:21:28.731 "digest": "sha512", 00:21:28.731 "dhgroup": "ffdhe2048" 00:21:28.731 } 00:21:28.731 } 00:21:28.731 ]' 00:21:28.731 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.731 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.731 23:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.731 23:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:28.731 23:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.015 23:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.015 23:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.015 23:46:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.015 23:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:21:29.015 23:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:21:29.947 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.947 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.947 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.947 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.204 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.204 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.204 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.204 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.461 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:30.461 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.461 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.461 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:30.461 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:30.461 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.461 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.461 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.461 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:30.462 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.462 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.462 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.462 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.719 00:21:30.719 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.719 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.719 23:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.977 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.977 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.977 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.977 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.977 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.977 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.977 { 00:21:30.977 "cntlid": 107, 00:21:30.977 "qid": 0, 00:21:30.977 "state": "enabled", 00:21:30.977 "thread": "nvmf_tgt_poll_group_000", 00:21:30.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:30.977 "listen_address": { 00:21:30.977 "trtype": "TCP", 00:21:30.977 "adrfam": "IPv4", 00:21:30.977 "traddr": "10.0.0.2", 00:21:30.977 "trsvcid": "4420" 00:21:30.977 }, 00:21:30.977 "peer_address": { 00:21:30.977 "trtype": "TCP", 00:21:30.977 "adrfam": "IPv4", 00:21:30.977 "traddr": "10.0.0.1", 00:21:30.977 "trsvcid": "38270" 00:21:30.977 }, 00:21:30.977 "auth": { 00:21:30.977 "state": "completed", 00:21:30.977 "digest": "sha512", 00:21:30.977 "dhgroup": "ffdhe2048" 00:21:30.977 } 00:21:30.977 } 00:21:30.977 ]' 00:21:30.977 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.977 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.977 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.235 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:31.235 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:31.235 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.235 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.235 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.493 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:21:31.493 23:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:21:32.425 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.426 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.426 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.426 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.426 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.426 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.426 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.426 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
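Note: each combination is also exercised through the kernel initiator, where nvme connect is handed the generated DHHC-1 secrets directly, after which the connection is torn down and the host is removed from the subsystem. A minimal sketch of that leg, assuming an nvme-cli build with DH-HMAC-CHAP support; the two secrets are passed in as arguments and correspond to the DHHC-1:xx:... strings printed in the trace.

#!/usr/bin/env bash
# Sketch of the kernel-initiator leg: in-band DH-HMAC-CHAP with nvme-cli.
# Usage: ./connect_kernel.sh <DHHC-1 host secret> <DHHC-1 ctrl secret>
set -euo pipefail

dhchap_secret=$1
dhchap_ctrl_secret=$2

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostid=5b23e107-7094-e311-b1cb-001e67a97d55
hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
subnqn=nqn.2024-03.io.spdk:cnode0

# Connect with a single I/O queue (-i 1) and the host/controller secrets;
# the target only accepts the host NQN added via nvmf_subsystem_add_host.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$dhchap_secret" --dhchap-ctrl-secret "$dhchap_ctrl_secret"

# Tear down so the next digest/dhgroup/key combination starts from a clean slate.
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"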
00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.684 23:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.942 00:21:32.942 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.942 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.942 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.200 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.200 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.200 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.200 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.200 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.200 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.200 { 00:21:33.200 "cntlid": 109, 00:21:33.200 "qid": 0, 00:21:33.200 "state": "enabled", 00:21:33.200 "thread": "nvmf_tgt_poll_group_000", 00:21:33.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.200 "listen_address": { 00:21:33.200 "trtype": "TCP", 00:21:33.200 "adrfam": "IPv4", 00:21:33.200 "traddr": "10.0.0.2", 00:21:33.200 "trsvcid": "4420" 00:21:33.200 }, 00:21:33.200 "peer_address": { 00:21:33.200 "trtype": "TCP", 00:21:33.200 "adrfam": "IPv4", 00:21:33.200 "traddr": "10.0.0.1", 00:21:33.200 "trsvcid": "38312" 00:21:33.200 }, 00:21:33.200 "auth": { 00:21:33.200 "state": "completed", 00:21:33.200 "digest": "sha512", 00:21:33.200 "dhgroup": "ffdhe2048" 00:21:33.200 } 00:21:33.200 } 00:21:33.200 ]' 00:21:33.200 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.458 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.458 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.458 23:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:33.458 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.458 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.458 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.458 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.716 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:21:33.716 23:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:21:34.648 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.648 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.648 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.648 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.648 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.648 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.648 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.648 23:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.905 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:34.905 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.905 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.905 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:34.905 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:34.905 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.905 23:46:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:34.905 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.905 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.905 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.905 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.905 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.905 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.163 00:21:35.420 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.420 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.420 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.677 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.677 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.677 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.678 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.678 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.678 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.678 { 00:21:35.678 "cntlid": 111, 00:21:35.678 "qid": 0, 00:21:35.678 "state": "enabled", 00:21:35.678 "thread": "nvmf_tgt_poll_group_000", 00:21:35.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.678 "listen_address": { 00:21:35.678 "trtype": "TCP", 00:21:35.678 "adrfam": "IPv4", 00:21:35.678 "traddr": "10.0.0.2", 00:21:35.678 "trsvcid": "4420" 00:21:35.678 }, 00:21:35.678 "peer_address": { 00:21:35.678 "trtype": "TCP", 00:21:35.678 "adrfam": "IPv4", 00:21:35.678 "traddr": "10.0.0.1", 00:21:35.678 "trsvcid": "38346" 00:21:35.678 }, 00:21:35.678 "auth": { 00:21:35.678 "state": "completed", 00:21:35.678 "digest": "sha512", 00:21:35.678 "dhgroup": "ffdhe2048" 00:21:35.678 } 00:21:35.678 } 00:21:35.678 ]' 00:21:35.678 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.678 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.678 
23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.678 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:35.678 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.678 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.678 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.678 23:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.936 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:21:35.936 23:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:21:36.868 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.868 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.868 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.868 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.868 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.868 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.868 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.868 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.868 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.126 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:37.126 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.126 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.126 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:37.126 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:37.127 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.127 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.127 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.127 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.127 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.127 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.127 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.127 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.693 00:21:37.693 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.693 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.693 23:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.952 { 00:21:37.952 "cntlid": 113, 00:21:37.952 "qid": 0, 00:21:37.952 "state": "enabled", 00:21:37.952 "thread": "nvmf_tgt_poll_group_000", 00:21:37.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:37.952 "listen_address": { 00:21:37.952 "trtype": "TCP", 00:21:37.952 "adrfam": "IPv4", 00:21:37.952 "traddr": "10.0.0.2", 00:21:37.952 "trsvcid": "4420" 00:21:37.952 }, 00:21:37.952 "peer_address": { 00:21:37.952 "trtype": "TCP", 00:21:37.952 "adrfam": "IPv4", 00:21:37.952 "traddr": "10.0.0.1", 00:21:37.952 "trsvcid": "48784" 00:21:37.952 }, 00:21:37.952 "auth": { 00:21:37.952 "state": "completed", 00:21:37.952 "digest": "sha512", 00:21:37.952 "dhgroup": "ffdhe3072" 00:21:37.952 } 00:21:37.952 } 00:21:37.952 ]' 00:21:37.952 23:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.952 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.210 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:21:38.210 23:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:21:39.144 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.144 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.144 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.144 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.144 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.144 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.144 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.144 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.711 23:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.969 00:21:39.969 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.969 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.969 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.228 { 00:21:40.228 "cntlid": 115, 00:21:40.228 "qid": 0, 00:21:40.228 "state": "enabled", 00:21:40.228 "thread": "nvmf_tgt_poll_group_000", 00:21:40.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.228 "listen_address": { 00:21:40.228 "trtype": "TCP", 00:21:40.228 "adrfam": "IPv4", 00:21:40.228 "traddr": "10.0.0.2", 00:21:40.228 "trsvcid": "4420" 00:21:40.228 }, 00:21:40.228 "peer_address": { 00:21:40.228 "trtype": "TCP", 00:21:40.228 "adrfam": "IPv4", 
00:21:40.228 "traddr": "10.0.0.1", 00:21:40.228 "trsvcid": "48804" 00:21:40.228 }, 00:21:40.228 "auth": { 00:21:40.228 "state": "completed", 00:21:40.228 "digest": "sha512", 00:21:40.228 "dhgroup": "ffdhe3072" 00:21:40.228 } 00:21:40.228 } 00:21:40.228 ]' 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.228 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.486 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:21:40.486 23:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:21:41.421 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.421 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.421 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.421 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.421 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.421 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.421 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:41.421 23:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.987 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.246 00:21:42.246 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.246 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.246 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.504 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.504 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.504 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.504 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.504 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.504 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.504 { 00:21:42.504 "cntlid": 117, 00:21:42.504 "qid": 0, 00:21:42.504 "state": "enabled", 00:21:42.504 "thread": "nvmf_tgt_poll_group_000", 00:21:42.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:42.504 "listen_address": { 00:21:42.504 "trtype": "TCP", 
00:21:42.504 "adrfam": "IPv4", 00:21:42.504 "traddr": "10.0.0.2", 00:21:42.504 "trsvcid": "4420" 00:21:42.504 }, 00:21:42.504 "peer_address": { 00:21:42.504 "trtype": "TCP", 00:21:42.504 "adrfam": "IPv4", 00:21:42.504 "traddr": "10.0.0.1", 00:21:42.504 "trsvcid": "48836" 00:21:42.504 }, 00:21:42.504 "auth": { 00:21:42.504 "state": "completed", 00:21:42.504 "digest": "sha512", 00:21:42.504 "dhgroup": "ffdhe3072" 00:21:42.504 } 00:21:42.504 } 00:21:42.504 ]' 00:21:42.504 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.762 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.762 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.763 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:42.763 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.763 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.763 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.763 23:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.021 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:21:43.021 23:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:21:43.955 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.955 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.955 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.955 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.955 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.955 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.955 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:43.955 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.213 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.780 00:21:44.780 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.780 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.780 23:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.057 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.057 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.057 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.058 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.058 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.058 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.058 { 00:21:45.058 "cntlid": 119, 00:21:45.058 "qid": 0, 00:21:45.058 "state": "enabled", 00:21:45.058 "thread": "nvmf_tgt_poll_group_000", 00:21:45.058 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.058 "listen_address": { 00:21:45.058 "trtype": "TCP", 00:21:45.058 "adrfam": "IPv4", 00:21:45.058 "traddr": "10.0.0.2", 00:21:45.058 "trsvcid": "4420" 00:21:45.058 }, 00:21:45.058 "peer_address": { 00:21:45.058 "trtype": "TCP", 00:21:45.058 "adrfam": "IPv4", 00:21:45.058 "traddr": "10.0.0.1", 00:21:45.058 "trsvcid": "48866" 00:21:45.058 }, 00:21:45.058 "auth": { 00:21:45.058 "state": "completed", 00:21:45.058 "digest": "sha512", 00:21:45.058 "dhgroup": "ffdhe3072" 00:21:45.058 } 00:21:45.058 } 00:21:45.058 ]' 00:21:45.058 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.058 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.058 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.058 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:45.059 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.059 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.059 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.059 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.321 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:21:45.321 23:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:21:46.253 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.253 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.253 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.253 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.253 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.253 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.253 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.253 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:46.253 23:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:46.818 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:46.819 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.819 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.819 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:46.819 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:46.819 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.819 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.819 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.819 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.819 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.819 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.819 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.819 23:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.076 00:21:47.076 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.076 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.076 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.334 23:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.334 { 00:21:47.334 "cntlid": 121, 00:21:47.334 "qid": 0, 00:21:47.334 "state": "enabled", 00:21:47.334 "thread": "nvmf_tgt_poll_group_000", 00:21:47.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:47.334 "listen_address": { 00:21:47.334 "trtype": "TCP", 00:21:47.334 "adrfam": "IPv4", 00:21:47.334 "traddr": "10.0.0.2", 00:21:47.334 "trsvcid": "4420" 00:21:47.334 }, 00:21:47.334 "peer_address": { 00:21:47.334 "trtype": "TCP", 00:21:47.334 "adrfam": "IPv4", 00:21:47.334 "traddr": "10.0.0.1", 00:21:47.334 "trsvcid": "39278" 00:21:47.334 }, 00:21:47.334 "auth": { 00:21:47.334 "state": "completed", 00:21:47.334 "digest": "sha512", 00:21:47.334 "dhgroup": "ffdhe4096" 00:21:47.334 } 00:21:47.334 } 00:21:47.334 ]' 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.334 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.902 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:21:47.902 23:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:21:48.835 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.835 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.835 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.835 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.835 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
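The DHCHAP round that the trace keeps repeating reduces to the short RPC sequence below. This is a minimal sketch reconstructed from the commands visible in the xtrace, not the test script itself; the rpc.py path, socket, addresses, NQNs and key names are the ones used by this job, and the target-side calls are assumed to go to the nvmf target's default RPC socket.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side: restrict the bdev_nvme layer to one digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side: allow the host NQN on the subsystem with its DH-HMAC-CHAP key(s).
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, authenticating with the same key pair.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the negotiated digest, dhgroup and auth state on the target's qpair.
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth | .digest, .dhgroup, .state'

# Tear down before the next digest/dhgroup/key combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

The same sequence is then re-run with nvme-cli (nvme connect ... --dhchap-secret ... / nvme disconnect) to confirm the kernel host path authenticates as well, as the surrounding trace shows.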
00:21:48.835 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.835 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:48.835 23:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.094 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.352 00:21:49.352 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.352 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.352 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.610 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.610 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.610 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.610 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.610 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.610 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.610 { 00:21:49.610 "cntlid": 123, 00:21:49.610 "qid": 0, 00:21:49.610 "state": "enabled", 00:21:49.610 "thread": "nvmf_tgt_poll_group_000", 00:21:49.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.610 "listen_address": { 00:21:49.610 "trtype": "TCP", 00:21:49.610 "adrfam": "IPv4", 00:21:49.610 "traddr": "10.0.0.2", 00:21:49.610 "trsvcid": "4420" 00:21:49.610 }, 00:21:49.610 "peer_address": { 00:21:49.610 "trtype": "TCP", 00:21:49.610 "adrfam": "IPv4", 00:21:49.610 "traddr": "10.0.0.1", 00:21:49.610 "trsvcid": "39310" 00:21:49.610 }, 00:21:49.610 "auth": { 00:21:49.610 "state": "completed", 00:21:49.610 "digest": "sha512", 00:21:49.610 "dhgroup": "ffdhe4096" 00:21:49.610 } 00:21:49.610 } 00:21:49.610 ]' 00:21:49.610 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.610 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.610 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.903 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:49.903 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.903 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.903 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.903 23:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.217 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:21:50.217 23:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:21:51.176 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.176 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.176 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.176 23:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.176 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.176 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.176 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.176 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.434 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.691 00:21:51.691 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.691 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.692 23:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.949 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.949 23:46:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.949 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.949 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.949 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.949 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.949 { 00:21:51.949 "cntlid": 125, 00:21:51.949 "qid": 0, 00:21:51.949 "state": "enabled", 00:21:51.949 "thread": "nvmf_tgt_poll_group_000", 00:21:51.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.949 "listen_address": { 00:21:51.949 "trtype": "TCP", 00:21:51.949 "adrfam": "IPv4", 00:21:51.949 "traddr": "10.0.0.2", 00:21:51.949 "trsvcid": "4420" 00:21:51.949 }, 00:21:51.949 "peer_address": { 00:21:51.949 "trtype": "TCP", 00:21:51.949 "adrfam": "IPv4", 00:21:51.949 "traddr": "10.0.0.1", 00:21:51.949 "trsvcid": "39336" 00:21:51.949 }, 00:21:51.949 "auth": { 00:21:51.949 "state": "completed", 00:21:51.949 "digest": "sha512", 00:21:51.949 "dhgroup": "ffdhe4096" 00:21:51.949 } 00:21:51.949 } 00:21:51.949 ]' 00:21:51.949 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.207 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.207 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.207 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:52.207 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.207 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.207 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.207 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.466 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:21:52.466 23:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:21:53.396 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.396 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.397 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.397 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.397 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.397 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.397 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.397 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.654 23:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.218 00:21:54.218 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.218 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.218 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.474 23:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.474 { 00:21:54.474 "cntlid": 127, 00:21:54.474 "qid": 0, 00:21:54.474 "state": "enabled", 00:21:54.474 "thread": "nvmf_tgt_poll_group_000", 00:21:54.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:54.474 "listen_address": { 00:21:54.474 "trtype": "TCP", 00:21:54.474 "adrfam": "IPv4", 00:21:54.474 "traddr": "10.0.0.2", 00:21:54.474 "trsvcid": "4420" 00:21:54.474 }, 00:21:54.474 "peer_address": { 00:21:54.474 "trtype": "TCP", 00:21:54.474 "adrfam": "IPv4", 00:21:54.474 "traddr": "10.0.0.1", 00:21:54.474 "trsvcid": "39364" 00:21:54.474 }, 00:21:54.474 "auth": { 00:21:54.474 "state": "completed", 00:21:54.474 "digest": "sha512", 00:21:54.474 "dhgroup": "ffdhe4096" 00:21:54.474 } 00:21:54.474 } 00:21:54.474 ]' 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.474 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.731 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:21:54.731 23:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:21:55.667 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.667 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.667 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.667 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.667 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.667 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.667 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.667 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:55.667 23:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.233 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.798 00:21:56.798 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.798 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.798 23:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.056 { 00:21:57.056 "cntlid": 129, 00:21:57.056 "qid": 0, 00:21:57.056 "state": "enabled", 00:21:57.056 "thread": "nvmf_tgt_poll_group_000", 00:21:57.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.056 "listen_address": { 00:21:57.056 "trtype": "TCP", 00:21:57.056 "adrfam": "IPv4", 00:21:57.056 "traddr": "10.0.0.2", 00:21:57.056 "trsvcid": "4420" 00:21:57.056 }, 00:21:57.056 "peer_address": { 00:21:57.056 "trtype": "TCP", 00:21:57.056 "adrfam": "IPv4", 00:21:57.056 "traddr": "10.0.0.1", 00:21:57.056 "trsvcid": "57742" 00:21:57.056 }, 00:21:57.056 "auth": { 00:21:57.056 "state": "completed", 00:21:57.056 "digest": "sha512", 00:21:57.056 "dhgroup": "ffdhe6144" 00:21:57.056 } 00:21:57.056 } 00:21:57.056 ]' 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.056 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.314 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:21:57.314 23:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret 
DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.688 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.689 23:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.254 00:21:59.254 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.254 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.254 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.511 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.511 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.511 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.511 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.511 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.511 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.511 { 00:21:59.511 "cntlid": 131, 00:21:59.511 "qid": 0, 00:21:59.511 "state": "enabled", 00:21:59.511 "thread": "nvmf_tgt_poll_group_000", 00:21:59.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:59.511 "listen_address": { 00:21:59.511 "trtype": "TCP", 00:21:59.511 "adrfam": "IPv4", 00:21:59.511 "traddr": "10.0.0.2", 00:21:59.511 "trsvcid": "4420" 00:21:59.511 }, 00:21:59.511 "peer_address": { 00:21:59.511 "trtype": "TCP", 00:21:59.511 "adrfam": "IPv4", 00:21:59.511 "traddr": "10.0.0.1", 00:21:59.511 "trsvcid": "57772" 00:21:59.511 }, 00:21:59.511 "auth": { 00:21:59.511 "state": "completed", 00:21:59.511 "digest": "sha512", 00:21:59.511 "dhgroup": "ffdhe6144" 00:21:59.511 } 00:21:59.511 } 00:21:59.511 ]' 00:21:59.511 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.769 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.769 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.769 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.769 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.769 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.769 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.769 23:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.027 23:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:22:00.027 23:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:22:00.959 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.959 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.959 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.959 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.959 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.959 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.959 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:00.959 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.217 23:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.783 00:22:01.783 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.783 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.783 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.041 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.041 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.041 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.041 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.041 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.041 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.041 { 00:22:02.041 "cntlid": 133, 00:22:02.041 "qid": 0, 00:22:02.041 "state": "enabled", 00:22:02.041 "thread": "nvmf_tgt_poll_group_000", 00:22:02.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.041 "listen_address": { 00:22:02.041 "trtype": "TCP", 00:22:02.041 "adrfam": "IPv4", 00:22:02.041 "traddr": "10.0.0.2", 00:22:02.041 "trsvcid": "4420" 00:22:02.041 }, 00:22:02.041 "peer_address": { 00:22:02.041 "trtype": "TCP", 00:22:02.041 "adrfam": "IPv4", 00:22:02.041 "traddr": "10.0.0.1", 00:22:02.041 "trsvcid": "57808" 00:22:02.041 }, 00:22:02.041 "auth": { 00:22:02.041 "state": "completed", 00:22:02.041 "digest": "sha512", 00:22:02.041 "dhgroup": "ffdhe6144" 00:22:02.041 } 00:22:02.041 } 00:22:02.041 ]' 00:22:02.041 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.298 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.298 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.298 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.299 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.299 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.299 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.299 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.556 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret 
DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:22:02.557 23:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:22:03.489 23:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.489 23:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.489 23:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.489 23:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.489 23:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.489 23:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.489 23:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.489 23:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:03.746 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.680 00:22:04.680 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.680 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.680 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.680 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.680 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.680 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.680 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.680 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.680 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.680 { 00:22:04.680 "cntlid": 135, 00:22:04.680 "qid": 0, 00:22:04.680 "state": "enabled", 00:22:04.680 "thread": "nvmf_tgt_poll_group_000", 00:22:04.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.680 "listen_address": { 00:22:04.680 "trtype": "TCP", 00:22:04.680 "adrfam": "IPv4", 00:22:04.680 "traddr": "10.0.0.2", 00:22:04.680 "trsvcid": "4420" 00:22:04.680 }, 00:22:04.680 "peer_address": { 00:22:04.680 "trtype": "TCP", 00:22:04.680 "adrfam": "IPv4", 00:22:04.680 "traddr": "10.0.0.1", 00:22:04.680 "trsvcid": "57840" 00:22:04.681 }, 00:22:04.681 "auth": { 00:22:04.681 "state": "completed", 00:22:04.681 "digest": "sha512", 00:22:04.681 "dhgroup": "ffdhe6144" 00:22:04.681 } 00:22:04.681 } 00:22:04.681 ]' 00:22:04.681 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.939 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.939 23:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.939 23:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:04.939 23:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.939 23:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.939 23:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.939 23:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.196 23:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:22:05.196 23:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:22:06.129 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.129 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.129 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.129 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.129 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.129 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.129 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.129 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.129 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.387 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:06.387 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.388 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.388 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:06.388 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:06.388 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.388 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.388 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.388 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.388 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.388 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.388 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.388 23:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.321 00:22:07.321 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.321 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.321 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.579 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.579 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.579 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.579 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.579 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.579 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.579 { 00:22:07.579 "cntlid": 137, 00:22:07.579 "qid": 0, 00:22:07.579 "state": "enabled", 00:22:07.579 "thread": "nvmf_tgt_poll_group_000", 00:22:07.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.579 "listen_address": { 00:22:07.579 "trtype": "TCP", 00:22:07.579 "adrfam": "IPv4", 00:22:07.579 "traddr": "10.0.0.2", 00:22:07.579 "trsvcid": "4420" 00:22:07.579 }, 00:22:07.579 "peer_address": { 00:22:07.579 "trtype": "TCP", 00:22:07.579 "adrfam": "IPv4", 00:22:07.579 "traddr": "10.0.0.1", 00:22:07.579 "trsvcid": "59724" 00:22:07.579 }, 00:22:07.579 "auth": { 00:22:07.579 "state": "completed", 00:22:07.579 "digest": "sha512", 00:22:07.579 "dhgroup": "ffdhe8192" 00:22:07.579 } 00:22:07.579 } 00:22:07.579 ]' 00:22:07.579 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.837 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.837 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.837 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:07.837 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.837 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.837 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.837 23:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.095 23:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:22:08.095 23:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:22:09.028 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.028 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.028 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.028 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.028 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.028 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.028 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.028 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.286 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:09.286 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.286 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.286 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:09.286 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:09.286 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.286 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.286 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.286 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.543 23:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.543 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.544 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.544 23:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.477 00:22:10.477 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.477 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.477 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.735 { 00:22:10.735 "cntlid": 139, 00:22:10.735 "qid": 0, 00:22:10.735 "state": "enabled", 00:22:10.735 "thread": "nvmf_tgt_poll_group_000", 00:22:10.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.735 "listen_address": { 00:22:10.735 "trtype": "TCP", 00:22:10.735 "adrfam": "IPv4", 00:22:10.735 "traddr": "10.0.0.2", 00:22:10.735 "trsvcid": "4420" 00:22:10.735 }, 00:22:10.735 "peer_address": { 00:22:10.735 "trtype": "TCP", 00:22:10.735 "adrfam": "IPv4", 00:22:10.735 "traddr": "10.0.0.1", 00:22:10.735 "trsvcid": "59758" 00:22:10.735 }, 00:22:10.735 "auth": { 00:22:10.735 "state": "completed", 00:22:10.735 "digest": "sha512", 00:22:10.735 "dhgroup": "ffdhe8192" 00:22:10.735 } 00:22:10.735 } 00:22:10.735 ]' 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.735 23:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.735 23:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.992 23:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:22:10.992 23:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: --dhchap-ctrl-secret DHHC-1:02:ZTVlZmQzNTJmM2FlYWE5MDFiM2I4ZGQ5NmI1ZGFlODZhODMwMzQ2NWZhNWRhMDlk0Xq8qA==: 00:22:11.931 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.931 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.931 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.931 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.931 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.931 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.931 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.931 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:12.189 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:12.189 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.189 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.189 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:12.189 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:12.189 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.189 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.189 23:46:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.189 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.189 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.447 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.447 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.447 23:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.381 00:22:13.381 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.381 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.381 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.381 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.381 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.382 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.382 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.640 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.640 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.640 { 00:22:13.640 "cntlid": 141, 00:22:13.640 "qid": 0, 00:22:13.640 "state": "enabled", 00:22:13.640 "thread": "nvmf_tgt_poll_group_000", 00:22:13.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.640 "listen_address": { 00:22:13.640 "trtype": "TCP", 00:22:13.640 "adrfam": "IPv4", 00:22:13.640 "traddr": "10.0.0.2", 00:22:13.640 "trsvcid": "4420" 00:22:13.640 }, 00:22:13.640 "peer_address": { 00:22:13.640 "trtype": "TCP", 00:22:13.640 "adrfam": "IPv4", 00:22:13.640 "traddr": "10.0.0.1", 00:22:13.640 "trsvcid": "59794" 00:22:13.640 }, 00:22:13.640 "auth": { 00:22:13.640 "state": "completed", 00:22:13.640 "digest": "sha512", 00:22:13.640 "dhgroup": "ffdhe8192" 00:22:13.640 } 00:22:13.640 } 00:22:13.640 ]' 00:22:13.640 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.640 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.640 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.640 23:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.640 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.640 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.640 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.640 23:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.898 23:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:22:13.898 23:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:01:N2U4ODNmMjM3OWMwOGYxMjRiZGQ0ZDYyNzNmNDY4ZDYp1JFI: 00:22:14.831 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.831 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.831 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.831 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.090 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.090 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.090 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:15.090 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:15.348 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:15.348 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.348 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:15.348 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:15.348 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:15.348 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.348 23:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:15.348 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.348 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.348 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.348 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:15.348 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:15.348 23:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.281 00:22:16.281 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.281 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.282 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.539 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.539 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.539 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.539 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.539 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.539 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.539 { 00:22:16.539 "cntlid": 143, 00:22:16.539 "qid": 0, 00:22:16.539 "state": "enabled", 00:22:16.539 "thread": "nvmf_tgt_poll_group_000", 00:22:16.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:16.539 "listen_address": { 00:22:16.539 "trtype": "TCP", 00:22:16.539 "adrfam": "IPv4", 00:22:16.539 "traddr": "10.0.0.2", 00:22:16.539 "trsvcid": "4420" 00:22:16.539 }, 00:22:16.539 "peer_address": { 00:22:16.539 "trtype": "TCP", 00:22:16.539 "adrfam": "IPv4", 00:22:16.539 "traddr": "10.0.0.1", 00:22:16.539 "trsvcid": "42930" 00:22:16.539 }, 00:22:16.539 "auth": { 00:22:16.539 "state": "completed", 00:22:16.539 "digest": "sha512", 00:22:16.539 "dhgroup": "ffdhe8192" 00:22:16.539 } 00:22:16.539 } 00:22:16.539 ]' 00:22:16.539 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.539 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.539 
23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.539 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:16.539 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.540 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.540 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.540 23:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.797 23:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:22:16.797 23:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:22:17.730 23:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.730 23:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.730 23:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.730 23:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.730 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.730 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:17.730 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:17.730 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:17.730 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.730 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.730 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.988 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:17.988 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.988 23:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.988 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:17.988 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:17.988 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.988 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.988 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.988 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.988 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.988 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.988 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.989 23:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.922 00:22:18.922 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.922 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.922 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.180 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.180 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.180 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.180 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.180 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.180 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.180 { 00:22:19.180 "cntlid": 145, 00:22:19.180 "qid": 0, 00:22:19.180 "state": "enabled", 00:22:19.180 "thread": "nvmf_tgt_poll_group_000", 00:22:19.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:19.180 "listen_address": { 00:22:19.180 "trtype": "TCP", 00:22:19.180 "adrfam": "IPv4", 00:22:19.180 "traddr": "10.0.0.2", 00:22:19.180 "trsvcid": "4420" 00:22:19.180 }, 00:22:19.180 "peer_address": { 00:22:19.180 
"trtype": "TCP", 00:22:19.180 "adrfam": "IPv4", 00:22:19.180 "traddr": "10.0.0.1", 00:22:19.180 "trsvcid": "42958" 00:22:19.180 }, 00:22:19.180 "auth": { 00:22:19.180 "state": "completed", 00:22:19.180 "digest": "sha512", 00:22:19.180 "dhgroup": "ffdhe8192" 00:22:19.180 } 00:22:19.180 } 00:22:19.180 ]' 00:22:19.180 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.180 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.180 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.438 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.438 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.438 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.438 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.438 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.696 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:22:19.696 23:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGYzN2JkZWNlOWYwODE4MDM2NzI5MjVlNWE5MTE5ZTY3MWVkMzlhNDYwNzJjODJmkjMpnw==: --dhchap-ctrl-secret DHHC-1:03:MGE5NzY0OTI4ZjUzN2QxNWU3YzkyNzI1YTQyZDY1YmRlMTFlMDE4MGNjN2JlN2Q3Mzk3YjA5OTFlYTRiZmQyNCJsREQ=: 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:20.630 23:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:21.563 request: 00:22:21.563 { 00:22:21.563 "name": "nvme0", 00:22:21.563 "trtype": "tcp", 00:22:21.563 "traddr": "10.0.0.2", 00:22:21.563 "adrfam": "ipv4", 00:22:21.563 "trsvcid": "4420", 00:22:21.563 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.563 "prchk_reftag": false, 00:22:21.563 "prchk_guard": false, 00:22:21.563 "hdgst": false, 00:22:21.563 "ddgst": false, 00:22:21.563 "dhchap_key": "key2", 00:22:21.563 "allow_unrecognized_csi": false, 00:22:21.563 "method": "bdev_nvme_attach_controller", 00:22:21.563 "req_id": 1 00:22:21.563 } 00:22:21.563 Got JSON-RPC error response 00:22:21.563 response: 00:22:21.563 { 00:22:21.563 "code": -5, 00:22:21.563 "message": "Input/output error" 00:22:21.563 } 00:22:21.563 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:21.563 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:21.563 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.564 23:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.564 23:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:22.497 request: 00:22:22.497 { 00:22:22.497 "name": "nvme0", 00:22:22.497 "trtype": "tcp", 00:22:22.497 "traddr": "10.0.0.2", 00:22:22.497 "adrfam": "ipv4", 00:22:22.497 "trsvcid": "4420", 00:22:22.497 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:22.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:22.497 "prchk_reftag": false, 00:22:22.497 "prchk_guard": false, 00:22:22.497 "hdgst": false, 00:22:22.497 "ddgst": false, 00:22:22.497 "dhchap_key": "key1", 00:22:22.497 "dhchap_ctrlr_key": "ckey2", 00:22:22.497 "allow_unrecognized_csi": false, 00:22:22.497 "method": "bdev_nvme_attach_controller", 00:22:22.497 "req_id": 1 00:22:22.497 } 00:22:22.497 Got JSON-RPC error response 00:22:22.497 response: 00:22:22.497 { 00:22:22.497 "code": -5, 00:22:22.497 "message": "Input/output error" 00:22:22.497 } 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:22.497 23:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.497 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:22.498 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.498 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:22.498 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.498 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.498 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.498 23:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.432 request: 00:22:23.432 { 00:22:23.432 "name": "nvme0", 00:22:23.432 "trtype": "tcp", 00:22:23.432 "traddr": "10.0.0.2", 00:22:23.432 "adrfam": "ipv4", 00:22:23.432 "trsvcid": "4420", 00:22:23.432 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:23.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:23.432 "prchk_reftag": false, 00:22:23.432 "prchk_guard": false, 00:22:23.432 "hdgst": false, 00:22:23.432 "ddgst": false, 00:22:23.432 "dhchap_key": "key1", 00:22:23.432 "dhchap_ctrlr_key": "ckey1", 00:22:23.432 "allow_unrecognized_csi": false, 00:22:23.432 "method": "bdev_nvme_attach_controller", 00:22:23.432 "req_id": 1 00:22:23.432 } 00:22:23.432 Got JSON-RPC error response 00:22:23.432 response: 00:22:23.432 { 00:22:23.432 "code": -5, 00:22:23.432 "message": "Input/output error" 00:22:23.432 } 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 179114 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 179114 ']' 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 179114 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.432 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 179114 00:22:23.433 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.433 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.433 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 179114' 00:22:23.433 killing process with pid 179114 00:22:23.433 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 179114 00:22:23.433 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 179114 00:22:23.690 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:23.690 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.690 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.690 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:23.690 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=202463 00:22:23.690 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:23.691 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 202463 00:22:23.691 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 202463 ']' 00:22:23.691 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.691 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.691 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.691 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.691 23:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.948 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.948 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:23.948 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.948 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.948 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.948 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.949 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:23.949 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 202463 00:22:23.949 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 202463 ']' 00:22:23.949 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.949 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.949 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:23.949 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.949 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.207 null0 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Icn 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.dXC ]] 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dXC 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Pex 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.RmZ ]] 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RmZ 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.207 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:24.465 23:46:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.a1q 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.uGc ]] 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uGc 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.r19 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:22:24.465 23:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.836 nvme0n1 00:22:25.836 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.836 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.836 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.094 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.094 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.094 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.094 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.094 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.094 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.094 { 00:22:26.094 "cntlid": 1, 00:22:26.094 "qid": 0, 00:22:26.094 "state": "enabled", 00:22:26.094 "thread": "nvmf_tgt_poll_group_000", 00:22:26.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.094 "listen_address": { 00:22:26.094 "trtype": "TCP", 00:22:26.094 "adrfam": "IPv4", 00:22:26.094 "traddr": "10.0.0.2", 00:22:26.094 "trsvcid": "4420" 00:22:26.094 }, 00:22:26.094 "peer_address": { 00:22:26.094 "trtype": "TCP", 00:22:26.094 "adrfam": "IPv4", 00:22:26.094 "traddr": "10.0.0.1", 00:22:26.094 "trsvcid": "42992" 00:22:26.094 }, 00:22:26.094 "auth": { 00:22:26.094 "state": "completed", 00:22:26.094 "digest": "sha512", 00:22:26.094 "dhgroup": "ffdhe8192" 00:22:26.094 } 00:22:26.094 } 00:22:26.094 ]' 00:22:26.094 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.094 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.094 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.094 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:26.094 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.352 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.352 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.352 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.609 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:22:26.610 23:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:22:27.542 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.542 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.542 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.542 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.542 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.542 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:27.542 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.542 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.542 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.542 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:27.543 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:27.800 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:27.801 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:27.801 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:27.801 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:27.801 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.801 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:27.801 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.801 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:27.801 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.801 23:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.058 request: 00:22:28.058 { 00:22:28.058 "name": "nvme0", 00:22:28.058 "trtype": "tcp", 00:22:28.058 "traddr": "10.0.0.2", 00:22:28.058 "adrfam": "ipv4", 00:22:28.058 "trsvcid": "4420", 00:22:28.059 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:28.059 "prchk_reftag": false, 00:22:28.059 "prchk_guard": false, 00:22:28.059 "hdgst": false, 00:22:28.059 "ddgst": false, 00:22:28.059 "dhchap_key": "key3", 00:22:28.059 "allow_unrecognized_csi": false, 00:22:28.059 "method": "bdev_nvme_attach_controller", 00:22:28.059 "req_id": 1 00:22:28.059 } 00:22:28.059 Got JSON-RPC error response 00:22:28.059 response: 00:22:28.059 { 00:22:28.059 "code": -5, 00:22:28.059 "message": "Input/output error" 00:22:28.059 } 00:22:28.059 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:28.059 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:28.059 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:28.059 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:28.059 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:28.059 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:28.059 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:28.059 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:28.316 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:28.316 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:28.316 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:28.316 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:28.316 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.316 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:28.316 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.316 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:28.316 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.316 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.574 request: 00:22:28.574 { 00:22:28.574 "name": "nvme0", 00:22:28.574 "trtype": "tcp", 00:22:28.574 "traddr": "10.0.0.2", 00:22:28.574 "adrfam": "ipv4", 00:22:28.574 "trsvcid": "4420", 00:22:28.574 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:28.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:28.575 "prchk_reftag": false, 00:22:28.575 "prchk_guard": false, 00:22:28.575 "hdgst": false, 00:22:28.575 "ddgst": false, 00:22:28.575 "dhchap_key": "key3", 00:22:28.575 "allow_unrecognized_csi": false, 00:22:28.575 "method": "bdev_nvme_attach_controller", 00:22:28.575 "req_id": 1 00:22:28.575 } 00:22:28.575 Got JSON-RPC error response 00:22:28.575 response: 00:22:28.575 { 00:22:28.575 "code": -5, 00:22:28.575 "message": "Input/output error" 00:22:28.575 } 00:22:28.575 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:28.575 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:28.575 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:28.575 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:28.575 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:28.575 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:28.575 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:28.575 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.575 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.575 23:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:28.833 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:29.399 request: 00:22:29.399 { 00:22:29.399 "name": "nvme0", 00:22:29.399 "trtype": "tcp", 00:22:29.399 "traddr": "10.0.0.2", 00:22:29.399 "adrfam": "ipv4", 00:22:29.399 "trsvcid": "4420", 00:22:29.399 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:29.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.399 "prchk_reftag": false, 00:22:29.399 "prchk_guard": false, 00:22:29.399 "hdgst": false, 00:22:29.399 "ddgst": false, 00:22:29.399 "dhchap_key": "key0", 00:22:29.399 "dhchap_ctrlr_key": "key1", 00:22:29.399 "allow_unrecognized_csi": false, 00:22:29.399 "method": "bdev_nvme_attach_controller", 00:22:29.399 "req_id": 1 00:22:29.399 } 00:22:29.399 Got JSON-RPC error response 00:22:29.399 response: 00:22:29.399 { 00:22:29.399 "code": -5, 00:22:29.399 "message": "Input/output error" 00:22:29.399 } 00:22:29.399 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:29.399 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.399 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.399 23:47:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.399 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:29.399 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:29.399 23:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:29.965 nvme0n1 00:22:29.965 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:29.965 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:29.965 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.222 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.223 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.223 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.480 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:30.481 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.481 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.481 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.481 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:30.481 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:30.481 23:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:31.948 nvme0n1 00:22:31.948 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:31.948 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:31.948 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.266 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.266 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:32.266 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.266 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.266 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.266 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:32.266 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.266 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:32.524 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.524 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:22:32.524 23:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: --dhchap-ctrl-secret DHHC-1:03:ZWE3NTJjNmNlYjQ2NDZjZmFkMGJkNDU0ODE1NTM4ODg3OGIxZTQ5Zjk2OTdlNjhmMzExOWI2OTU5MDgyZmI2MFcXjdA=: 00:22:33.458 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:33.458 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:33.458 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:33.458 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:33.458 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:33.458 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:33.458 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:33.458 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.458 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.716 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:33.716 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:33.716 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:33.716 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:33.716 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.716 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:33.716 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.716 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:33.716 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:33.716 23:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:34.650 request: 00:22:34.650 { 00:22:34.650 "name": "nvme0", 00:22:34.650 "trtype": "tcp", 00:22:34.650 "traddr": "10.0.0.2", 00:22:34.650 "adrfam": "ipv4", 00:22:34.650 "trsvcid": "4420", 00:22:34.650 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:34.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:34.650 "prchk_reftag": false, 00:22:34.650 "prchk_guard": false, 00:22:34.650 "hdgst": false, 00:22:34.650 "ddgst": false, 00:22:34.650 "dhchap_key": "key1", 00:22:34.650 "allow_unrecognized_csi": false, 00:22:34.650 "method": "bdev_nvme_attach_controller", 00:22:34.650 "req_id": 1 00:22:34.650 } 00:22:34.650 Got JSON-RPC error response 00:22:34.650 response: 00:22:34.650 { 00:22:34.650 "code": -5, 00:22:34.650 "message": "Input/output error" 00:22:34.650 } 00:22:34.650 23:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:34.650 23:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:34.650 23:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:34.650 23:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:34.650 23:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:34.650 23:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:34.650 23:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:36.548 nvme0n1 00:22:36.548 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:36.548 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.548 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:36.548 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.548 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.548 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.806 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.806 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.806 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.806 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.806 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:36.806 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:36.806 23:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:37.064 nvme0n1 00:22:37.064 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:37.064 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:37.064 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.322 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.322 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.322 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: '' 2s 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: ]] 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MGMzMTY0NTY4MDJjMjBkYzM4NGVlMGFmNTIxMzgxMDNW98TV: 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:37.888 23:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: 2s 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: ]] 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDczYTI4ZDg4MGMxYzVkYzQ2ZDM1OTQ3MzAxNzJkY2RiZTQyZWI4MmExOGFhODA0chVKxg==: 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:39.787 23:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:41.686 23:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:41.686 23:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:41.687 23:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:41.687 23:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:41.687 23:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:41.687 23:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:41.687 23:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:41.687 23:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.945 23:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:41.945 23:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.945 23:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.945 23:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.945 23:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:41.945 23:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:41.945 23:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:43.317 nvme0n1 00:22:43.317 23:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:43.317 23:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.317 23:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.317 23:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.317 23:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:43.317 23:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:44.249 23:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:44.249 23:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:44.249 23:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.507 23:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.507 23:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.507 23:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.507 23:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.507 23:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.507 23:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:44.507 23:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:44.764 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:44.764 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.764 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:45.022 23:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:45.955 request: 00:22:45.955 { 00:22:45.955 "name": "nvme0", 00:22:45.955 "dhchap_key": "key1", 00:22:45.955 "dhchap_ctrlr_key": "key3", 00:22:45.955 "method": "bdev_nvme_set_keys", 00:22:45.955 "req_id": 1 00:22:45.955 } 00:22:45.955 Got JSON-RPC error response 00:22:45.955 response: 00:22:45.955 { 00:22:45.955 "code": -13, 00:22:45.955 "message": "Permission denied" 00:22:45.955 } 00:22:45.955 23:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:45.955 23:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.955 23:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.955 23:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.955 23:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:45.955 23:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.955 23:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:46.521 23:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:46.521 23:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:47.454 23:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:47.454 23:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:47.454 23:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.712 23:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:47.712 23:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:47.712 23:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.712 23:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.712 23:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.712 23:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:47.712 23:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:47.712 23:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:49.086 nvme0n1 00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
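The re-keying sequence in this part of the trace follows a fixed order: the target's expected keys are updated first with nvmf_subsystem_set_keys, the host then re-authenticates the live controller with bdev_nvme_set_keys, and bdev_nvme_get_controllers confirms the controller is still present. A condensed sketch of one rotation, reusing the socket path, NQNs and key names from this run (environment-specific values, shown only for illustration):

# 1) Target first: set the new key pair expected from this host.
scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# 2) Host second: re-authenticate the existing controller with the matching pair.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
# 3) Confirm the controller survived re-authentication.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
# A key pair the target does not expect is rejected with JSON-RPC error -13 (Permission denied),
# which is what the NOT cases in this trace assert.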
00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:49.086 23:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:50.019 request: 00:22:50.019 { 00:22:50.019 "name": "nvme0", 00:22:50.019 "dhchap_key": "key2", 00:22:50.019 "dhchap_ctrlr_key": "key0", 00:22:50.019 "method": "bdev_nvme_set_keys", 00:22:50.019 "req_id": 1 00:22:50.019 } 00:22:50.019 Got JSON-RPC error response 00:22:50.019 response: 00:22:50.019 { 00:22:50.019 "code": -13, 00:22:50.019 "message": "Permission denied" 00:22:50.019 } 00:22:50.019 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:50.019 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:50.019 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:50.019 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:50.019 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:50.019 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.019 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:50.587 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:50.587 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:51.522 23:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:51.522 23:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:51.522 23:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.781 23:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:51.781 23:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:52.715 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:52.715 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:52.715 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:52.973 23:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 179139 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 179139 ']' 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 179139 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 179139 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 179139' 00:22:52.973 killing process with pid 179139 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 179139 00:22:52.973 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 179139 00:22:53.540 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:53.540 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:53.540 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:53.540 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:53.540 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:53.541 rmmod nvme_tcp 00:22:53.541 rmmod nvme_fabrics 00:22:53.541 rmmod nvme_keyring 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 202463 ']' 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 202463 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 202463 ']' 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 202463 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 202463 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 202463' 00:22:53.541 killing process with pid 202463 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 202463 00:22:53.541 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 202463 00:22:53.800 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:53.800 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:53.800 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:53.800 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:53.800 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:53.800 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:53.800 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:53.800 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:53.800 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:53.800 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.800 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.800 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.703 23:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:55.703 23:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Icn /tmp/spdk.key-sha256.Pex /tmp/spdk.key-sha384.a1q /tmp/spdk.key-sha512.r19 /tmp/spdk.key-sha512.dXC /tmp/spdk.key-sha384.RmZ /tmp/spdk.key-sha256.uGc '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:55.703 00:22:55.703 real 3m40.970s 00:22:55.703 user 8m36.338s 00:22:55.703 sys 0m27.271s 00:22:55.703 23:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:55.703 23:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.703 ************************************ 00:22:55.703 END TEST nvmf_auth_target 00:22:55.703 ************************************ 00:22:55.703 23:47:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:55.703 23:47:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:55.703 23:47:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:55.703 23:47:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:55.703 23:47:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:55.703 ************************************ 00:22:55.703 START TEST nvmf_bdevio_no_huge 00:22:55.703 ************************************ 00:22:55.703 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:55.962 * Looking for test storage... 00:22:55.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.962 --rc genhtml_branch_coverage=1 00:22:55.962 --rc genhtml_function_coverage=1 00:22:55.962 --rc genhtml_legend=1 00:22:55.962 --rc geninfo_all_blocks=1 00:22:55.962 --rc geninfo_unexecuted_blocks=1 00:22:55.962 00:22:55.962 ' 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.962 --rc genhtml_branch_coverage=1 00:22:55.962 --rc genhtml_function_coverage=1 00:22:55.962 --rc genhtml_legend=1 00:22:55.962 --rc geninfo_all_blocks=1 00:22:55.962 --rc geninfo_unexecuted_blocks=1 00:22:55.962 00:22:55.962 ' 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.962 --rc genhtml_branch_coverage=1 00:22:55.962 --rc genhtml_function_coverage=1 00:22:55.962 --rc genhtml_legend=1 00:22:55.962 --rc geninfo_all_blocks=1 00:22:55.962 --rc geninfo_unexecuted_blocks=1 00:22:55.962 00:22:55.962 ' 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:55.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.962 --rc genhtml_branch_coverage=1 00:22:55.962 --rc genhtml_function_coverage=1 00:22:55.962 --rc genhtml_legend=1 00:22:55.962 --rc geninfo_all_blocks=1 00:22:55.962 --rc geninfo_unexecuted_blocks=1 00:22:55.962 00:22:55.962 ' 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.962 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:55.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:55.963 23:47:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:57.867 
23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:57.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.867 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:57.868 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:57.868 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:57.868 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.868 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:58.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:22:58.128 00:22:58.128 --- 10.0.0.2 ping statistics --- 00:22:58.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.128 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:58.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:22:58.128 00:22:58.128 --- 10.0.0.1 ping statistics --- 00:22:58.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.128 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=208007 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 208007 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 208007 ']' 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.128 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.386 [2024-11-19 23:47:32.447707] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:22:58.387 [2024-11-19 23:47:32.447811] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:58.387 [2024-11-19 23:47:32.528950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:58.387 [2024-11-19 23:47:32.580753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.387 [2024-11-19 23:47:32.580825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.387 [2024-11-19 23:47:32.580848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.387 [2024-11-19 23:47:32.580862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.387 [2024-11-19 23:47:32.580874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
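At this point nvmf_tgt has been launched inside the cvl_0_0_ns_spdk namespace with hugepages disabled (--no-huge -s 1024) on core mask 0x78, and the harness blocks until the RPC socket answers before issuing any commands. A hedged sketch of that start-and-wait step, using the paths from this run and polling with a plain rpc.py call (the real waitforlisten helper is more elaborate):

  #!/usr/bin/env bash
  # Start the target app in the test namespace and wait for its RPC socket (sketch).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NETNS=cvl_0_0_ns_spdk
  RPC_SOCK=/var/tmp/spdk.sock

  ip netns exec "$NETNS" "$SPDK/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  nvmfpid=$!

  # Poll until the app answers on its RPC socket, same idea as waitforlisten.
  for _ in $(seq 1 120); do
      "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null && break
      sleep 0.5
  done
  echo "nvmf_tgt up with pid $nvmfpid"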
00:22:58.387 [2024-11-19 23:47:32.582010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:58.387 [2024-11-19 23:47:32.582137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:58.387 [2024-11-19 23:47:32.582141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.387 [2024-11-19 23:47:32.582083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.645 [2024-11-19 23:47:32.740689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.645 Malloc0 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.645 [2024-11-19 23:47:32.779081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.645 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.646 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:58.646 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:58.646 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:58.646 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:58.646 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.646 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.646 { 00:22:58.646 "params": { 00:22:58.646 "name": "Nvme$subsystem", 00:22:58.646 "trtype": "$TEST_TRANSPORT", 00:22:58.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.646 "adrfam": "ipv4", 00:22:58.646 "trsvcid": "$NVMF_PORT", 00:22:58.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.646 "hdgst": ${hdgst:-false}, 00:22:58.646 "ddgst": ${ddgst:-false} 00:22:58.646 }, 00:22:58.646 "method": "bdev_nvme_attach_controller" 00:22:58.646 } 00:22:58.646 EOF 00:22:58.646 )") 00:22:58.646 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:58.646 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:58.646 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:58.646 23:47:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:58.646 "params": { 00:22:58.646 "name": "Nvme1", 00:22:58.646 "trtype": "tcp", 00:22:58.646 "traddr": "10.0.0.2", 00:22:58.646 "adrfam": "ipv4", 00:22:58.646 "trsvcid": "4420", 00:22:58.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.646 "hdgst": false, 00:22:58.646 "ddgst": false 00:22:58.646 }, 00:22:58.646 "method": "bdev_nvme_attach_controller" 00:22:58.646 }' 00:22:58.646 [2024-11-19 23:47:32.828820] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
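The target side is now provisioned over RPC: a TCP transport created with -o -u 8192, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420; the bdevio initiator is then handed an equivalent JSON config on /dev/fd/62. The same provisioning expressed as direct rpc.py calls (a sketch; assumes the target is already listening on /var/tmp/spdk.sock):

  # Target-side provisioning equivalent to the rpc_cmd sequence above (sketch).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc_py() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

  rpc_py nvmf_create_transport -t tcp -o -u 8192         # same transport options the harness used
  rpc_py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512-byte blocks
  rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up, the generated JSON tells bdevio to attach as nqn.2016-06.io.spdk:host1 with header/data digests disabled, and the CUnit suite then runs against Nvme1n1, as the log below shows.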
00:22:58.646 [2024-11-19 23:47:32.828911] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid208039 ] 00:22:58.646 [2024-11-19 23:47:32.898974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:58.646 [2024-11-19 23:47:32.949334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.646 [2024-11-19 23:47:32.949384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.646 [2024-11-19 23:47:32.949388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.211 I/O targets: 00:22:59.211 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:59.211 00:22:59.211 00:22:59.211 CUnit - A unit testing framework for C - Version 2.1-3 00:22:59.211 http://cunit.sourceforge.net/ 00:22:59.211 00:22:59.211 00:22:59.211 Suite: bdevio tests on: Nvme1n1 00:22:59.211 Test: blockdev write read block ...passed 00:22:59.211 Test: blockdev write zeroes read block ...passed 00:22:59.211 Test: blockdev write zeroes read no split ...passed 00:22:59.211 Test: blockdev write zeroes read split ...passed 00:22:59.211 Test: blockdev write zeroes read split partial ...passed 00:22:59.211 Test: blockdev reset ...[2024-11-19 23:47:33.417581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:59.211 [2024-11-19 23:47:33.417703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc896a0 (9): Bad file descriptor 00:22:59.211 [2024-11-19 23:47:33.433193] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:59.211 passed 00:22:59.211 Test: blockdev write read 8 blocks ...passed 00:22:59.211 Test: blockdev write read size > 128k ...passed 00:22:59.211 Test: blockdev write read invalid size ...passed 00:22:59.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:59.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:59.211 Test: blockdev write read max offset ...passed 00:22:59.470 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:59.470 Test: blockdev writev readv 8 blocks ...passed 00:22:59.470 Test: blockdev writev readv 30 x 1block ...passed 00:22:59.470 Test: blockdev writev readv block ...passed 00:22:59.470 Test: blockdev writev readv size > 128k ...passed 00:22:59.470 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:59.470 Test: blockdev comparev and writev ...[2024-11-19 23:47:33.603400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.470 [2024-11-19 23:47:33.603439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.470 [2024-11-19 23:47:33.603464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.470 [2024-11-19 23:47:33.603482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.470 [2024-11-19 23:47:33.603812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.470 [2024-11-19 23:47:33.603838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.470 [2024-11-19 23:47:33.603861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.470 [2024-11-19 23:47:33.603878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.470 [2024-11-19 23:47:33.604189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.470 [2024-11-19 23:47:33.604215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.470 [2024-11-19 23:47:33.604237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.470 [2024-11-19 23:47:33.604254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.470 [2024-11-19 23:47:33.604572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.470 [2024-11-19 23:47:33.604597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.470 [2024-11-19 23:47:33.604618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.470 [2024-11-19 23:47:33.604650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.470 passed 00:22:59.470 Test: blockdev nvme passthru rw ...passed 00:22:59.470 Test: blockdev nvme passthru vendor specific ...[2024-11-19 23:47:33.687315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:59.470 [2024-11-19 23:47:33.687343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.470 [2024-11-19 23:47:33.687512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:59.470 [2024-11-19 23:47:33.687534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.470 [2024-11-19 23:47:33.687691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:59.470 [2024-11-19 23:47:33.687722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.470 [2024-11-19 23:47:33.687862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:59.470 [2024-11-19 23:47:33.687886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.470 passed 00:22:59.470 Test: blockdev nvme admin passthru ...passed 00:22:59.470 Test: blockdev copy ...passed 00:22:59.470 00:22:59.470 Run Summary: Type Total Ran Passed Failed Inactive 00:22:59.470 suites 1 1 n/a 0 0 00:22:59.470 tests 23 23 23 0 0 00:22:59.470 asserts 152 152 152 0 n/a 00:22:59.470 00:22:59.470 Elapsed time = 0.973 seconds 00:23:00.073 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:00.073 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.073 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:00.073 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:00.074 rmmod nvme_tcp 00:23:00.074 rmmod nvme_fabrics 00:23:00.074 rmmod nvme_keyring 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 208007 ']' 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 208007 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 208007 ']' 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 208007 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 208007 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 208007' 00:23:00.074 killing process with pid 208007 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 208007 00:23:00.074 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 208007 00:23:00.363 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:00.363 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:00.363 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:00.363 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:00.363 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:00.363 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:00.363 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:00.363 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:00.363 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:00.363 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.363 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.363 23:47:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.895 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:02.895 00:23:02.895 real 0m6.589s 00:23:02.895 user 0m10.516s 00:23:02.895 sys 0m2.550s 00:23:02.895 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:02.896 ************************************ 00:23:02.896 END TEST nvmf_bdevio_no_huge 00:23:02.896 ************************************ 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:02.896 ************************************ 00:23:02.896 START TEST nvmf_tls 00:23:02.896 ************************************ 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:02.896 * Looking for test storage... 00:23:02.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:02.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.896 --rc genhtml_branch_coverage=1 00:23:02.896 --rc genhtml_function_coverage=1 00:23:02.896 --rc genhtml_legend=1 00:23:02.896 --rc geninfo_all_blocks=1 00:23:02.896 --rc geninfo_unexecuted_blocks=1 00:23:02.896 00:23:02.896 ' 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:02.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.896 --rc genhtml_branch_coverage=1 00:23:02.896 --rc genhtml_function_coverage=1 00:23:02.896 --rc genhtml_legend=1 00:23:02.896 --rc geninfo_all_blocks=1 00:23:02.896 --rc geninfo_unexecuted_blocks=1 00:23:02.896 00:23:02.896 ' 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:02.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.896 --rc genhtml_branch_coverage=1 00:23:02.896 --rc genhtml_function_coverage=1 00:23:02.896 --rc genhtml_legend=1 00:23:02.896 --rc geninfo_all_blocks=1 00:23:02.896 --rc geninfo_unexecuted_blocks=1 00:23:02.896 00:23:02.896 ' 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:02.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.896 --rc genhtml_branch_coverage=1 00:23:02.896 --rc genhtml_function_coverage=1 00:23:02.896 --rc genhtml_legend=1 00:23:02.896 --rc geninfo_all_blocks=1 00:23:02.896 --rc geninfo_unexecuted_blocks=1 00:23:02.896 00:23:02.896 ' 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
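As at the top of the bdevio suite, tls.sh begins with the shared lcov-detection preamble: the lcov version string is split on dots and compared element-wise against 2 (cmp_versions via lt 1.15 2) before the coverage flags are exported. A minimal sketch of that style of dotted-version comparison (not the harness code itself, just the idea):

  # Dotted-version "less than" check in the spirit of scripts/common.sh cmp_versions (sketch).
  version_lt() {                      # version_lt 1.15 2 -> success when $1 < $2
      local IFS=.-:
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1                        # equal is not less-than
  }

  version_lt 1.15 2 && echo "1.15 is less than 2"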
00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:02.896 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:02.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:02.897 23:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:04.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:04.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:04.801 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:04.802 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:04.802 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:04.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:23:04.802 00:23:04.802 --- 10.0.0.2 ping statistics --- 00:23:04.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.802 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:23:04.802 00:23:04.802 --- 10.0.0.1 ping statistics --- 00:23:04.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.802 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:04.802 23:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=210118 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 210118 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 210118 ']' 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.802 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.802 [2024-11-19 23:47:39.076571] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:23:04.802 [2024-11-19 23:47:39.076660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.060 [2024-11-19 23:47:39.160610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.060 [2024-11-19 23:47:39.207061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.060 [2024-11-19 23:47:39.207154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.060 [2024-11-19 23:47:39.207170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.060 [2024-11-19 23:47:39.207182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.060 [2024-11-19 23:47:39.207193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.060 [2024-11-19 23:47:39.207837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.060 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.060 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:05.060 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:05.060 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.060 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.060 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.060 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:05.060 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:05.624 true 00:23:05.624 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:05.624 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:05.882 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:05.882 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:05.882 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:06.141 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:06.141 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:06.399 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:06.399 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:06.399 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:06.657 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:06.657 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:06.914 23:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:06.914 23:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:06.914 23:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:06.914 23:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:07.173 23:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:07.173 23:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:07.173 23:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:07.431 23:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:07.431 23:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:07.689 23:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:07.689 23:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:07.689 23:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:07.947 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:07.947 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:08.205 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:08.205 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:08.205 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:08.205 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:08.205 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:08.205 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:08.205 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:08.205 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.6GhmIg1C8C 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.qhXyBkWne4 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6GhmIg1C8C 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.qhXyBkWne4 00:23:08.464 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:08.722 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:08.980 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.6GhmIg1C8C 00:23:08.980 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6GhmIg1C8C 00:23:08.980 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:09.238 [2024-11-19 23:47:43.518891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.238 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:09.805 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:09.805 [2024-11-19 23:47:44.056353] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:09.805 [2024-11-19 23:47:44.056643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.805 23:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:10.062 malloc0 00:23:10.062 23:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:10.320 23:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6GhmIg1C8C 00:23:10.887 23:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:10.887 23:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.6GhmIg1C8C 00:23:23.096 Initializing NVMe Controllers 00:23:23.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:23.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:23.096 Initialization complete. Launching workers. 00:23:23.096 ======================================================== 00:23:23.096 Latency(us) 00:23:23.096 Device Information : IOPS MiB/s Average min max 00:23:23.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7944.79 31.03 8058.28 1118.43 9478.41 00:23:23.096 ======================================================== 00:23:23.097 Total : 7944.79 31.03 8058.28 1118.43 9478.41 00:23:23.097 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6GhmIg1C8C 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6GhmIg1C8C 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=212148 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 212148 /var/tmp/bdevperf.sock 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 212148 ']' 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:23.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.097 [2024-11-19 23:47:55.348097] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:23:23.097 [2024-11-19 23:47:55.348180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid212148 ] 00:23:23.097 [2024-11-19 23:47:55.415762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.097 [2024-11-19 23:47:55.467013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6GhmIg1C8C 00:23:23.097 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:23.097 [2024-11-19 23:47:56.122198] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.097 TLSTESTn1 00:23:23.097 23:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:23.097 Running I/O for 10 seconds... 
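The trace above is the whole client-side path for the positive TLS case. As a minimal sketch (assuming the SPDK checkout and the temporary key file this particular job created; paths, key names, and NQNs are the ones visible in the trace, not fixed values), the same TLS-protected bdevperf session can be reproduced with the commands tls.sh just ran:

    # Start bdevperf with its own RPC socket, register the PSK in the keyring,
    # attach a TLS-protected controller, then drive the 10-second verify workload.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6GhmIg1C8C
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests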
00:23:24.030 3103.00 IOPS, 12.12 MiB/s [2024-11-19T22:47:59.716Z] 3201.00 IOPS, 12.50 MiB/s [2024-11-19T22:48:00.650Z] 3231.67 IOPS, 12.62 MiB/s [2024-11-19T22:48:01.582Z] 3239.00 IOPS, 12.65 MiB/s [2024-11-19T22:48:02.514Z] 3244.00 IOPS, 12.67 MiB/s [2024-11-19T22:48:03.445Z] 3233.67 IOPS, 12.63 MiB/s [2024-11-19T22:48:04.376Z] 3231.29 IOPS, 12.62 MiB/s [2024-11-19T22:48:05.750Z] 3234.25 IOPS, 12.63 MiB/s [2024-11-19T22:48:06.685Z] 3243.89 IOPS, 12.67 MiB/s [2024-11-19T22:48:06.685Z] 3249.80 IOPS, 12.69 MiB/s 00:23:32.373 Latency(us) 00:23:32.373 [2024-11-19T22:48:06.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.373 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:32.373 Verification LBA range: start 0x0 length 0x2000 00:23:32.373 TLSTESTn1 : 10.02 3256.75 12.72 0.00 0.00 39238.68 6505.05 37088.52 00:23:32.373 [2024-11-19T22:48:06.685Z] =================================================================================================================== 00:23:32.373 [2024-11-19T22:48:06.685Z] Total : 3256.75 12.72 0.00 0.00 39238.68 6505.05 37088.52 00:23:32.373 { 00:23:32.373 "results": [ 00:23:32.373 { 00:23:32.373 "job": "TLSTESTn1", 00:23:32.373 "core_mask": "0x4", 00:23:32.373 "workload": "verify", 00:23:32.373 "status": "finished", 00:23:32.373 "verify_range": { 00:23:32.373 "start": 0, 00:23:32.373 "length": 8192 00:23:32.373 }, 00:23:32.373 "queue_depth": 128, 00:23:32.373 "io_size": 4096, 00:23:32.373 "runtime": 10.017028, 00:23:32.373 "iops": 3256.754398610047, 00:23:32.373 "mibps": 12.721696869570495, 00:23:32.373 "io_failed": 0, 00:23:32.373 "io_timeout": 0, 00:23:32.373 "avg_latency_us": 39238.68122853565, 00:23:32.373 "min_latency_us": 6505.054814814815, 00:23:32.373 "max_latency_us": 37088.52148148148 00:23:32.373 } 00:23:32.373 ], 00:23:32.373 "core_count": 1 00:23:32.373 } 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 212148 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 212148 ']' 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 212148 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 212148 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 212148' 00:23:32.373 killing process with pid 212148 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 212148 00:23:32.373 Received shutdown signal, test time was about 10.000000 seconds 00:23:32.373 00:23:32.373 Latency(us) 00:23:32.373 [2024-11-19T22:48:06.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.373 [2024-11-19T22:48:06.685Z] 
=================================================================================================================== 00:23:32.373 [2024-11-19T22:48:06.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 212148 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qhXyBkWne4 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qhXyBkWne4 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qhXyBkWne4 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:32.373 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qhXyBkWne4 00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=213491 00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 213491 /var/tmp/bdevperf.sock 00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 213491 ']' 00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.374 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.374 [2024-11-19 23:48:06.661951] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:23:32.374 [2024-11-19 23:48:06.662032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid213491 ] 00:23:32.632 [2024-11-19 23:48:06.730654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.632 [2024-11-19 23:48:06.775570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.632 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.632 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:32.632 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qhXyBkWne4 00:23:32.890 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:33.456 [2024-11-19 23:48:07.462032] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.457 [2024-11-19 23:48:07.469638] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:33.457 [2024-11-19 23:48:07.470276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c25370 (107): Transport endpoint is not connected 00:23:33.457 [2024-11-19 23:48:07.471266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c25370 (9): Bad file descriptor 00:23:33.457 [2024-11-19 23:48:07.472265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:33.457 [2024-11-19 23:48:07.472285] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:33.457 [2024-11-19 23:48:07.472299] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:33.457 [2024-11-19 23:48:07.472319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
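The errors above are the expected outcome of the first negative case (target/tls.sh@147): the controller is attached with the second key, /tmp/tmp.qhXyBkWne4, which was never registered against nqn.2016-06.io.spdk:cnode1, so the target drops the connection during the TLS handshake and the attach surfaces as the Input/output error in the JSON-RPC response below. A hedged sketch of just this check, using the same RPCs the script traces:

    # Sketch only: the attach is expected to fail, so success is treated as the error.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qhXyBkWne4
    if scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
           -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
           -q nqn.2016-06.io.spdk:host1 --psk key0; then
        echo "unexpected: attach succeeded with an unregistered PSK" >&2
        exit 1
    fi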
00:23:33.457 request: 00:23:33.457 { 00:23:33.457 "name": "TLSTEST", 00:23:33.457 "trtype": "tcp", 00:23:33.457 "traddr": "10.0.0.2", 00:23:33.457 "adrfam": "ipv4", 00:23:33.457 "trsvcid": "4420", 00:23:33.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.457 "prchk_reftag": false, 00:23:33.457 "prchk_guard": false, 00:23:33.457 "hdgst": false, 00:23:33.457 "ddgst": false, 00:23:33.457 "psk": "key0", 00:23:33.457 "allow_unrecognized_csi": false, 00:23:33.457 "method": "bdev_nvme_attach_controller", 00:23:33.457 "req_id": 1 00:23:33.457 } 00:23:33.457 Got JSON-RPC error response 00:23:33.457 response: 00:23:33.457 { 00:23:33.457 "code": -5, 00:23:33.457 "message": "Input/output error" 00:23:33.457 } 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 213491 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 213491 ']' 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 213491 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 213491 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 213491' 00:23:33.457 killing process with pid 213491 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 213491 00:23:33.457 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.457 00:23:33.457 Latency(us) 00:23:33.457 [2024-11-19T22:48:07.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.457 [2024-11-19T22:48:07.769Z] =================================================================================================================== 00:23:33.457 [2024-11-19T22:48:07.769Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 213491 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6GhmIg1C8C 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.6GhmIg1C8C 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6GhmIg1C8C 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6GhmIg1C8C 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=213857 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 213857 /var/tmp/bdevperf.sock 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 213857 ']' 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.457 23:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.457 [2024-11-19 23:48:07.761463] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:23:33.457 [2024-11-19 23:48:07.761547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid213857 ] 00:23:33.716 [2024-11-19 23:48:07.835228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.716 [2024-11-19 23:48:07.884631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.716 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.716 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.716 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6GhmIg1C8C 00:23:34.281 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:34.281 [2024-11-19 23:48:08.562618] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.281 [2024-11-19 23:48:08.572427] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:34.282 [2024-11-19 23:48:08.572457] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:34.282 [2024-11-19 23:48:08.572506] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:34.282 [2024-11-19 23:48:08.572876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82f370 (107): Transport endpoint is not connected 00:23:34.282 [2024-11-19 23:48:08.573867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82f370 (9): Bad file descriptor 00:23:34.282 [2024-11-19 23:48:08.574866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:34.282 [2024-11-19 23:48:08.574887] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:34.282 [2024-11-19 23:48:08.574916] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:34.282 [2024-11-19 23:48:08.574935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
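In this second negative case the key file is the correct one (/tmp/tmp.6GhmIg1C8C) but the host NQN is not: nqn.2016-06.io.spdk:host2 was never added to the subsystem, and the target looks up the PSK by the TLS identity it logs above ("NVMe0R01 <hostnqn> <subnqn>"), so the lookup fails and the handshake is torn down the same way. For illustration only (this is exactly the step the test deliberately omits), registering host2 with the same key on the target side is what would let that identity resolve:

    # Hypothetical remedy, not part of this test: add host2 to the subsystem with the same PSK
    # already registered in the target keyring as key0.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk key0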
00:23:34.282 request: 00:23:34.282 { 00:23:34.282 "name": "TLSTEST", 00:23:34.282 "trtype": "tcp", 00:23:34.282 "traddr": "10.0.0.2", 00:23:34.282 "adrfam": "ipv4", 00:23:34.282 "trsvcid": "4420", 00:23:34.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.282 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:34.282 "prchk_reftag": false, 00:23:34.282 "prchk_guard": false, 00:23:34.282 "hdgst": false, 00:23:34.282 "ddgst": false, 00:23:34.282 "psk": "key0", 00:23:34.282 "allow_unrecognized_csi": false, 00:23:34.282 "method": "bdev_nvme_attach_controller", 00:23:34.282 "req_id": 1 00:23:34.282 } 00:23:34.282 Got JSON-RPC error response 00:23:34.282 response: 00:23:34.282 { 00:23:34.282 "code": -5, 00:23:34.282 "message": "Input/output error" 00:23:34.282 } 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 213857 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 213857 ']' 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 213857 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 213857 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 213857' 00:23:34.541 killing process with pid 213857 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 213857 00:23:34.541 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.541 00:23:34.541 Latency(us) 00:23:34.541 [2024-11-19T22:48:08.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.541 [2024-11-19T22:48:08.853Z] =================================================================================================================== 00:23:34.541 [2024-11-19T22:48:08.853Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 213857 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6GhmIg1C8C 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.6GhmIg1C8C 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6GhmIg1C8C 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6GhmIg1C8C 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=214240 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 214240 /var/tmp/bdevperf.sock 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 214240 ']' 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.541 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.799 [2024-11-19 23:48:08.852365] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:23:34.799 [2024-11-19 23:48:08.852472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid214240 ] 00:23:34.799 [2024-11-19 23:48:08.921632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.799 [2024-11-19 23:48:08.969015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.799 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.799 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.799 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6GhmIg1C8C 00:23:35.364 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.364 [2024-11-19 23:48:09.622341] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.364 [2024-11-19 23:48:09.627692] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:35.364 [2024-11-19 23:48:09.627724] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:35.364 [2024-11-19 23:48:09.627761] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:35.364 [2024-11-19 23:48:09.628297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd5370 (107): Transport endpoint is not connected 00:23:35.364 [2024-11-19 23:48:09.629285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd5370 (9): Bad file descriptor 00:23:35.364 [2024-11-19 23:48:09.630284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:35.364 [2024-11-19 23:48:09.630305] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:35.364 [2024-11-19 23:48:09.630319] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:35.364 [2024-11-19 23:48:09.630338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:35.364 request: 00:23:35.364 { 00:23:35.364 "name": "TLSTEST", 00:23:35.364 "trtype": "tcp", 00:23:35.364 "traddr": "10.0.0.2", 00:23:35.364 "adrfam": "ipv4", 00:23:35.364 "trsvcid": "4420", 00:23:35.364 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:35.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.364 "prchk_reftag": false, 00:23:35.364 "prchk_guard": false, 00:23:35.364 "hdgst": false, 00:23:35.364 "ddgst": false, 00:23:35.364 "psk": "key0", 00:23:35.364 "allow_unrecognized_csi": false, 00:23:35.364 "method": "bdev_nvme_attach_controller", 00:23:35.364 "req_id": 1 00:23:35.364 } 00:23:35.364 Got JSON-RPC error response 00:23:35.364 response: 00:23:35.364 { 00:23:35.364 "code": -5, 00:23:35.364 "message": "Input/output error" 00:23:35.364 } 00:23:35.364 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 214240 00:23:35.364 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 214240 ']' 00:23:35.364 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 214240 00:23:35.364 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:35.364 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.364 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 214240 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 214240' 00:23:35.622 killing process with pid 214240 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 214240 00:23:35.622 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.622 00:23:35.622 Latency(us) 00:23:35.622 [2024-11-19T22:48:09.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.622 [2024-11-19T22:48:09.934Z] =================================================================================================================== 00:23:35.622 [2024-11-19T22:48:09.934Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 214240 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:35.622 23:48:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=214387 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 214387 /var/tmp/bdevperf.sock 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 214387 ']' 00:23:35.622 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.623 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.623 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.623 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.623 23:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.623 [2024-11-19 23:48:09.930493] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:23:35.623 [2024-11-19 23:48:09.930573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid214387 ] 00:23:35.881 [2024-11-19 23:48:09.997990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.881 [2024-11-19 23:48:10.048836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.881 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.881 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:35.881 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:36.138 [2024-11-19 23:48:10.429886] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:36.138 [2024-11-19 23:48:10.429925] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:36.138 request: 00:23:36.138 { 00:23:36.138 "name": "key0", 00:23:36.138 "path": "", 00:23:36.138 "method": "keyring_file_add_key", 00:23:36.138 "req_id": 1 00:23:36.138 } 00:23:36.138 Got JSON-RPC error response 00:23:36.138 response: 00:23:36.138 { 00:23:36.138 "code": -1, 00:23:36.138 "message": "Operation not permitted" 00:23:36.138 } 00:23:36.396 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:36.654 [2024-11-19 23:48:10.710807] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.654 [2024-11-19 23:48:10.710875] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:36.654 request: 00:23:36.654 { 00:23:36.654 "name": "TLSTEST", 00:23:36.654 "trtype": "tcp", 00:23:36.654 "traddr": "10.0.0.2", 00:23:36.654 "adrfam": "ipv4", 00:23:36.654 "trsvcid": "4420", 00:23:36.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.654 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.654 "prchk_reftag": false, 00:23:36.654 "prchk_guard": false, 00:23:36.654 "hdgst": false, 00:23:36.654 "ddgst": false, 00:23:36.654 "psk": "key0", 00:23:36.654 "allow_unrecognized_csi": false, 00:23:36.654 "method": "bdev_nvme_attach_controller", 00:23:36.654 "req_id": 1 00:23:36.654 } 00:23:36.654 Got JSON-RPC error response 00:23:36.654 response: 00:23:36.654 { 00:23:36.654 "code": -126, 00:23:36.654 "message": "Required key not available" 00:23:36.654 } 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 214387 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 214387 ']' 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 214387 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 214387 
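This case exercises the keyring path check rather than the TLS handshake: keyring_file_add_key rejects the empty string because the file-based keyring only accepts absolute paths, so no key named key0 ever exists and the subsequent attach fails with -126 "Required key not available". Condensed from the trace above (the cleanup of bdevperf pid 214387 continues below):

# An empty key path never reaches the target; registration fails locally and
# the attach that references key0 cannot resolve it.
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''    # -1: Operation not permitted
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0                      # -126: Required key not available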
00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 214387' 00:23:36.654 killing process with pid 214387 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 214387 00:23:36.654 Received shutdown signal, test time was about 10.000000 seconds 00:23:36.654 00:23:36.654 Latency(us) 00:23:36.654 [2024-11-19T22:48:10.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.654 [2024-11-19T22:48:10.966Z] =================================================================================================================== 00:23:36.654 [2024-11-19T22:48:10.966Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 214387 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 210118 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 210118 ']' 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 210118 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.654 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 210118 00:23:36.913 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:36.913 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:36.913 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 210118' 00:23:36.913 killing process with pid 210118 00:23:36.913 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 210118 00:23:36.913 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 210118 00:23:36.913 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:36.913 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:36.913 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:36.913 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:36.913 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:36.913 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:36.913 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:37.171 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:37.171 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:37.171 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.EtYodpPzOo 00:23:37.171 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:37.171 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.EtYodpPzOo 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=214659 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 214659 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 214659 ']' 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.172 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.172 [2024-11-19 23:48:11.289702] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:23:37.172 [2024-11-19 23:48:11.289792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.172 [2024-11-19 23:48:11.367701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.172 [2024-11-19 23:48:11.414401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.172 [2024-11-19 23:48:11.414469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
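The key used from here on was generated a few lines above by the format_interchange_psk helper from nvmf/common.sh: the raw hex key is wrapped into the NVMe TLS PSK interchange form NVMeTLSkey-1:02:<base64>: (the 02 field is the hash identifier selected by the digest argument, i.e. SHA-384 here), written to a mktemp file and restricted to owner-only permissions so the keyring will accept it. Condensed:

# Build the PSK interchange string, persist it, and lock down its permissions.
key_long=$(format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2)
key_long_path=$(mktemp)                  # /tmp/tmp.EtYodpPzOo in this run
echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"

The 0600 mode matters later: a 0666 copy of this same file is exactly what keyring_file_check_path rejects further down.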
00:23:37.172 [2024-11-19 23:48:11.414485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.172 [2024-11-19 23:48:11.414499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.172 [2024-11-19 23:48:11.414510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.172 [2024-11-19 23:48:11.415153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.429 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.429 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.430 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.430 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.430 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.430 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.430 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.EtYodpPzOo 00:23:37.430 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EtYodpPzOo 00:23:37.430 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:37.687 [2024-11-19 23:48:11.797443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.687 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:37.944 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:38.202 [2024-11-19 23:48:12.330918] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:38.202 [2024-11-19 23:48:12.331223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.202 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:38.460 malloc0 00:23:38.460 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:38.717 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo 00:23:38.975 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EtYodpPzOo 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EtYodpPzOo 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=214865 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 214865 /var/tmp/bdevperf.sock 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 214865 ']' 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.233 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.233 [2024-11-19 23:48:13.448801] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:23:39.233 [2024-11-19 23:48:13.448907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid214865 ] 00:23:39.233 [2024-11-19 23:48:13.519377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.491 [2024-11-19 23:48:13.565310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.491 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.491 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:39.491 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo 00:23:39.749 23:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:40.007 [2024-11-19 23:48:14.183640] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.007 TLSTESTn1 00:23:40.007 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:40.263 Running I/O for 10 seconds... 00:23:42.208 3467.00 IOPS, 13.54 MiB/s [2024-11-19T22:48:17.453Z] 3457.00 IOPS, 13.50 MiB/s [2024-11-19T22:48:18.387Z] 3464.67 IOPS, 13.53 MiB/s [2024-11-19T22:48:19.760Z] 3370.00 IOPS, 13.16 MiB/s [2024-11-19T22:48:20.694Z] 3392.40 IOPS, 13.25 MiB/s [2024-11-19T22:48:21.627Z] 3382.00 IOPS, 13.21 MiB/s [2024-11-19T22:48:22.561Z] 3397.86 IOPS, 13.27 MiB/s [2024-11-19T22:48:23.494Z] 3408.62 IOPS, 13.31 MiB/s [2024-11-19T22:48:24.426Z] 3417.22 IOPS, 13.35 MiB/s [2024-11-19T22:48:24.426Z] 3437.10 IOPS, 13.43 MiB/s 00:23:50.114 Latency(us) 00:23:50.114 [2024-11-19T22:48:24.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.114 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:50.114 Verification LBA range: start 0x0 length 0x2000 00:23:50.114 TLSTESTn1 : 10.02 3443.63 13.45 0.00 0.00 37110.78 7136.14 50098.63 00:23:50.114 [2024-11-19T22:48:24.426Z] =================================================================================================================== 00:23:50.114 [2024-11-19T22:48:24.426Z] Total : 3443.63 13.45 0.00 0.00 37110.78 7136.14 50098.63 00:23:50.114 { 00:23:50.114 "results": [ 00:23:50.114 { 00:23:50.114 "job": "TLSTESTn1", 00:23:50.114 "core_mask": "0x4", 00:23:50.114 "workload": "verify", 00:23:50.114 "status": "finished", 00:23:50.114 "verify_range": { 00:23:50.114 "start": 0, 00:23:50.114 "length": 8192 00:23:50.114 }, 00:23:50.114 "queue_depth": 128, 00:23:50.114 "io_size": 4096, 00:23:50.114 "runtime": 10.017917, 00:23:50.114 "iops": 3443.6300480429213, 00:23:50.114 "mibps": 13.451679875167661, 00:23:50.114 "io_failed": 0, 00:23:50.114 "io_timeout": 0, 00:23:50.114 "avg_latency_us": 37110.78390545453, 00:23:50.114 "min_latency_us": 7136.142222222222, 00:23:50.114 "max_latency_us": 50098.63111111111 00:23:50.114 } 00:23:50.114 ], 00:23:50.114 
"core_count": 1 00:23:50.114 } 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 214865 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 214865 ']' 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 214865 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 214865 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 214865' 00:23:50.374 killing process with pid 214865 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 214865 00:23:50.374 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.374 00:23:50.374 Latency(us) 00:23:50.374 [2024-11-19T22:48:24.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.374 [2024-11-19T22:48:24.686Z] =================================================================================================================== 00:23:50.374 [2024-11-19T22:48:24.686Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 214865 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.EtYodpPzOo 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EtYodpPzOo 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EtYodpPzOo 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EtYodpPzOo 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:50.374 
23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EtYodpPzOo 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=216153 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 216153 /var/tmp/bdevperf.sock 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 216153 ']' 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.374 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.632 [2024-11-19 23:48:24.726271] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
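While the next bdevperf instance (pid 216153) starts up here for the 0666-permission case, it is worth contrasting it with the sequence that produced the TLSTESTn1 results above, which is the plain positive path: matching host and subsystem NQNs on both sides and a key file readable only by its owner. Condensed from the trace:

# Positive path: register the PSK, attach over TLS (the controller is exposed
# as bdev TLSTESTn1), then trigger the verify workload the bdevperf process
# was started with (-q 128 -o 4096 -w verify -t 10).
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The run beginning here repeats the same attach after chmod 0666 on the key file and is expected to fail at keyring registration instead.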
00:23:50.632 [2024-11-19 23:48:24.726352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid216153 ] 00:23:50.632 [2024-11-19 23:48:24.792742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.632 [2024-11-19 23:48:24.838647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.890 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.890 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:50.890 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo 00:23:50.890 [2024-11-19 23:48:25.199743] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.EtYodpPzOo': 0100666 00:23:50.890 [2024-11-19 23:48:25.199791] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:51.148 request: 00:23:51.148 { 00:23:51.148 "name": "key0", 00:23:51.148 "path": "/tmp/tmp.EtYodpPzOo", 00:23:51.148 "method": "keyring_file_add_key", 00:23:51.148 "req_id": 1 00:23:51.148 } 00:23:51.148 Got JSON-RPC error response 00:23:51.148 response: 00:23:51.148 { 00:23:51.148 "code": -1, 00:23:51.148 "message": "Operation not permitted" 00:23:51.148 } 00:23:51.148 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.406 [2024-11-19 23:48:25.472590] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.406 [2024-11-19 23:48:25.472658] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:51.406 request: 00:23:51.406 { 00:23:51.406 "name": "TLSTEST", 00:23:51.406 "trtype": "tcp", 00:23:51.406 "traddr": "10.0.0.2", 00:23:51.406 "adrfam": "ipv4", 00:23:51.406 "trsvcid": "4420", 00:23:51.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.406 "prchk_reftag": false, 00:23:51.406 "prchk_guard": false, 00:23:51.406 "hdgst": false, 00:23:51.406 "ddgst": false, 00:23:51.406 "psk": "key0", 00:23:51.406 "allow_unrecognized_csi": false, 00:23:51.406 "method": "bdev_nvme_attach_controller", 00:23:51.406 "req_id": 1 00:23:51.406 } 00:23:51.406 Got JSON-RPC error response 00:23:51.406 response: 00:23:51.406 { 00:23:51.406 "code": -126, 00:23:51.406 "message": "Required key not available" 00:23:51.406 } 00:23:51.406 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 216153 00:23:51.406 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 216153 ']' 00:23:51.406 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 216153 00:23:51.406 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:51.406 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.406 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216153 00:23:51.406 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:51.406 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:51.406 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216153' 00:23:51.406 killing process with pid 216153 00:23:51.406 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 216153 00:23:51.406 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.406 00:23:51.406 Latency(us) 00:23:51.406 [2024-11-19T22:48:25.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.406 [2024-11-19T22:48:25.718Z] =================================================================================================================== 00:23:51.406 [2024-11-19T22:48:25.718Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:51.406 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 216153 00:23:51.664 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 214659 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 214659 ']' 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 214659 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 214659 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 214659' 00:23:51.665 killing process with pid 214659 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 214659 00:23:51.665 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 214659 00:23:51.923 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:51.923 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:51.923 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.923 23:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.923 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=216296 
00:23:51.923 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:51.923 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 216296 00:23:51.923 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 216296 ']' 00:23:51.923 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.923 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.923 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.923 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.923 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.923 [2024-11-19 23:48:26.056261] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:23:51.923 [2024-11-19 23:48:26.056358] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.923 [2024-11-19 23:48:26.139387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.923 [2024-11-19 23:48:26.186462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.923 [2024-11-19 23:48:26.186532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.923 [2024-11-19 23:48:26.186548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.923 [2024-11-19 23:48:26.186561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.923 [2024-11-19 23:48:26.186573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
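The freshly started target (pid 216296) is now configured by setup_nvmf_tgt. Because the key file was left at mode 0666 by the previous case, the keyring registration is the step expected to fail, and nvmf_subsystem_add_host subsequently reports that key0 does not exist. Condensed, the target-side sequence traced below is:

# Target-side TLS setup: TCP transport, a subsystem backed by a malloc
# namespace, a TLS-enabled listener (-k), PSK registration, and host
# authorization tied to that PSK.
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo     # rejected while the file is 0666
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0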
00:23:51.923 [2024-11-19 23:48:26.187234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.EtYodpPzOo 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.EtYodpPzOo 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.EtYodpPzOo 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EtYodpPzOo 00:23:52.181 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:52.440 [2024-11-19 23:48:26.592191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.440 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:52.698 23:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:52.955 [2024-11-19 23:48:27.113524] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.955 [2024-11-19 23:48:27.113788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.955 23:48:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:53.213 malloc0 00:23:53.213 23:48:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:53.471 23:48:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo 00:23:53.729 [2024-11-19 
23:48:27.931318] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.EtYodpPzOo': 0100666 00:23:53.729 [2024-11-19 23:48:27.931364] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:53.729 request: 00:23:53.729 { 00:23:53.729 "name": "key0", 00:23:53.729 "path": "/tmp/tmp.EtYodpPzOo", 00:23:53.729 "method": "keyring_file_add_key", 00:23:53.729 "req_id": 1 00:23:53.729 } 00:23:53.729 Got JSON-RPC error response 00:23:53.729 response: 00:23:53.729 { 00:23:53.729 "code": -1, 00:23:53.729 "message": "Operation not permitted" 00:23:53.729 } 00:23:53.729 23:48:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.987 [2024-11-19 23:48:28.192087] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:53.987 [2024-11-19 23:48:28.192151] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:53.987 request: 00:23:53.987 { 00:23:53.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.987 "host": "nqn.2016-06.io.spdk:host1", 00:23:53.987 "psk": "key0", 00:23:53.987 "method": "nvmf_subsystem_add_host", 00:23:53.987 "req_id": 1 00:23:53.987 } 00:23:53.987 Got JSON-RPC error response 00:23:53.987 response: 00:23:53.987 { 00:23:53.987 "code": -32603, 00:23:53.987 "message": "Internal error" 00:23:53.987 } 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 216296 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 216296 ']' 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 216296 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216296 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216296' 00:23:53.987 killing process with pid 216296 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 216296 00:23:53.987 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 216296 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.EtYodpPzOo 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=216715 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 216715 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 216715 ']' 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.245 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.245 [2024-11-19 23:48:28.509869] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:23:54.245 [2024-11-19 23:48:28.509952] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.503 [2024-11-19 23:48:28.580484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.503 [2024-11-19 23:48:28.624026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.503 [2024-11-19 23:48:28.624090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.503 [2024-11-19 23:48:28.624119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.503 [2024-11-19 23:48:28.624130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.503 [2024-11-19 23:48:28.624139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
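For reference, the RPC sequence exercised above can be reproduced with a short standalone script. This is a minimal sketch only, assuming a local SPDK checkout at $SPDK_DIR and a running nvmf_tgt on the default /var/tmp/spdk.sock; the key path, NQNs and listen address mirror this run, and the chmod is exactly the step the preceding negative test checks (keyring_file_add_key rejected the 0666 key file before the test re-keyed it to 0600).

  #!/usr/bin/env bash
  # Minimal sketch of the target-side TLS PSK setup mirrored from target/tls.sh above.
  set -euo pipefail
  SPDK_DIR=/path/to/spdk                 # assumption: local SPDK checkout
  RPC="$SPDK_DIR/scripts/rpc.py"         # issues JSON-RPCs to the nvmf_tgt on /var/tmp/spdk.sock
  KEY=/tmp/tmp.EtYodpPzOo                # PSK interchange file used in this run
  chmod 0600 "$KEY"                      # 0666 keys are rejected, as the keyring error above shows
  "$RPC" nvmf_create_transport -t tcp -o
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  "$RPC" bdev_malloc_create 32 4096 -b malloc0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  "$RPC" keyring_file_add_key key0 "$KEY"
  "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With the host registered against key0, an initiator presenting the same PSK can connect to 10.0.0.2:4420, which is what the bdevperf runs further below verify.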
00:23:54.503 [2024-11-19 23:48:28.624710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.503 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.503 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.503 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.503 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.503 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.503 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.503 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.EtYodpPzOo 00:23:54.503 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EtYodpPzOo 00:23:54.503 23:48:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.761 [2024-11-19 23:48:28.999846] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.761 23:48:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:55.017 23:48:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:55.274 [2024-11-19 23:48:29.525232] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.274 [2024-11-19 23:48:29.525523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.274 23:48:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:55.531 malloc0 00:23:55.531 23:48:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:55.789 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo 00:23:56.355 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:56.355 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=216975 00:23:56.355 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:56.355 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:56.355 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 216975 /var/tmp/bdevperf.sock 00:23:56.355 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 216975 ']' 00:23:56.355 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.355 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.355 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.355 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.355 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.613 [2024-11-19 23:48:30.686393] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:23:56.613 [2024-11-19 23:48:30.686471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid216975 ] 00:23:56.613 [2024-11-19 23:48:30.751267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.613 [2024-11-19 23:48:30.796312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.613 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.613 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:56.613 23:48:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo 00:23:57.178 23:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:57.178 [2024-11-19 23:48:31.423311] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.437 TLSTESTn1 00:23:57.437 23:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:57.695 23:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:57.695 "subsystems": [ 00:23:57.695 { 00:23:57.695 "subsystem": "keyring", 00:23:57.695 "config": [ 00:23:57.695 { 00:23:57.695 "method": "keyring_file_add_key", 00:23:57.695 "params": { 00:23:57.695 "name": "key0", 00:23:57.695 "path": "/tmp/tmp.EtYodpPzOo" 00:23:57.695 } 00:23:57.695 } 00:23:57.695 ] 00:23:57.695 }, 00:23:57.695 { 00:23:57.695 "subsystem": "iobuf", 00:23:57.695 "config": [ 00:23:57.695 { 00:23:57.695 "method": "iobuf_set_options", 00:23:57.695 "params": { 00:23:57.695 "small_pool_count": 8192, 00:23:57.695 "large_pool_count": 1024, 00:23:57.695 "small_bufsize": 8192, 00:23:57.695 "large_bufsize": 135168, 00:23:57.695 "enable_numa": false 00:23:57.695 } 00:23:57.695 } 00:23:57.695 ] 00:23:57.695 }, 00:23:57.695 { 00:23:57.695 "subsystem": "sock", 00:23:57.695 "config": [ 00:23:57.695 { 00:23:57.695 "method": "sock_set_default_impl", 00:23:57.695 "params": { 00:23:57.695 "impl_name": "posix" 
00:23:57.695 } 00:23:57.695 }, 00:23:57.695 { 00:23:57.695 "method": "sock_impl_set_options", 00:23:57.695 "params": { 00:23:57.695 "impl_name": "ssl", 00:23:57.695 "recv_buf_size": 4096, 00:23:57.695 "send_buf_size": 4096, 00:23:57.695 "enable_recv_pipe": true, 00:23:57.695 "enable_quickack": false, 00:23:57.695 "enable_placement_id": 0, 00:23:57.695 "enable_zerocopy_send_server": true, 00:23:57.695 "enable_zerocopy_send_client": false, 00:23:57.695 "zerocopy_threshold": 0, 00:23:57.695 "tls_version": 0, 00:23:57.695 "enable_ktls": false 00:23:57.695 } 00:23:57.695 }, 00:23:57.695 { 00:23:57.695 "method": "sock_impl_set_options", 00:23:57.695 "params": { 00:23:57.695 "impl_name": "posix", 00:23:57.695 "recv_buf_size": 2097152, 00:23:57.695 "send_buf_size": 2097152, 00:23:57.695 "enable_recv_pipe": true, 00:23:57.695 "enable_quickack": false, 00:23:57.695 "enable_placement_id": 0, 00:23:57.695 "enable_zerocopy_send_server": true, 00:23:57.695 "enable_zerocopy_send_client": false, 00:23:57.695 "zerocopy_threshold": 0, 00:23:57.695 "tls_version": 0, 00:23:57.695 "enable_ktls": false 00:23:57.695 } 00:23:57.695 } 00:23:57.695 ] 00:23:57.695 }, 00:23:57.695 { 00:23:57.695 "subsystem": "vmd", 00:23:57.695 "config": [] 00:23:57.695 }, 00:23:57.695 { 00:23:57.695 "subsystem": "accel", 00:23:57.695 "config": [ 00:23:57.695 { 00:23:57.695 "method": "accel_set_options", 00:23:57.695 "params": { 00:23:57.695 "small_cache_size": 128, 00:23:57.695 "large_cache_size": 16, 00:23:57.695 "task_count": 2048, 00:23:57.695 "sequence_count": 2048, 00:23:57.695 "buf_count": 2048 00:23:57.695 } 00:23:57.695 } 00:23:57.695 ] 00:23:57.695 }, 00:23:57.695 { 00:23:57.695 "subsystem": "bdev", 00:23:57.695 "config": [ 00:23:57.695 { 00:23:57.695 "method": "bdev_set_options", 00:23:57.695 "params": { 00:23:57.695 "bdev_io_pool_size": 65535, 00:23:57.695 "bdev_io_cache_size": 256, 00:23:57.695 "bdev_auto_examine": true, 00:23:57.695 "iobuf_small_cache_size": 128, 00:23:57.695 "iobuf_large_cache_size": 16 00:23:57.695 } 00:23:57.695 }, 00:23:57.695 { 00:23:57.695 "method": "bdev_raid_set_options", 00:23:57.695 "params": { 00:23:57.695 "process_window_size_kb": 1024, 00:23:57.695 "process_max_bandwidth_mb_sec": 0 00:23:57.695 } 00:23:57.695 }, 00:23:57.695 { 00:23:57.695 "method": "bdev_iscsi_set_options", 00:23:57.695 "params": { 00:23:57.695 "timeout_sec": 30 00:23:57.695 } 00:23:57.695 }, 00:23:57.695 { 00:23:57.695 "method": "bdev_nvme_set_options", 00:23:57.695 "params": { 00:23:57.695 "action_on_timeout": "none", 00:23:57.695 "timeout_us": 0, 00:23:57.695 "timeout_admin_us": 0, 00:23:57.695 "keep_alive_timeout_ms": 10000, 00:23:57.695 "arbitration_burst": 0, 00:23:57.695 "low_priority_weight": 0, 00:23:57.695 "medium_priority_weight": 0, 00:23:57.695 "high_priority_weight": 0, 00:23:57.696 "nvme_adminq_poll_period_us": 10000, 00:23:57.696 "nvme_ioq_poll_period_us": 0, 00:23:57.696 "io_queue_requests": 0, 00:23:57.696 "delay_cmd_submit": true, 00:23:57.696 "transport_retry_count": 4, 00:23:57.696 "bdev_retry_count": 3, 00:23:57.696 "transport_ack_timeout": 0, 00:23:57.696 "ctrlr_loss_timeout_sec": 0, 00:23:57.696 "reconnect_delay_sec": 0, 00:23:57.696 "fast_io_fail_timeout_sec": 0, 00:23:57.696 "disable_auto_failback": false, 00:23:57.696 "generate_uuids": false, 00:23:57.696 "transport_tos": 0, 00:23:57.696 "nvme_error_stat": false, 00:23:57.696 "rdma_srq_size": 0, 00:23:57.696 "io_path_stat": false, 00:23:57.696 "allow_accel_sequence": false, 00:23:57.696 "rdma_max_cq_size": 0, 00:23:57.696 
"rdma_cm_event_timeout_ms": 0, 00:23:57.696 "dhchap_digests": [ 00:23:57.696 "sha256", 00:23:57.696 "sha384", 00:23:57.696 "sha512" 00:23:57.696 ], 00:23:57.696 "dhchap_dhgroups": [ 00:23:57.696 "null", 00:23:57.696 "ffdhe2048", 00:23:57.696 "ffdhe3072", 00:23:57.696 "ffdhe4096", 00:23:57.696 "ffdhe6144", 00:23:57.696 "ffdhe8192" 00:23:57.696 ] 00:23:57.696 } 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "method": "bdev_nvme_set_hotplug", 00:23:57.696 "params": { 00:23:57.696 "period_us": 100000, 00:23:57.696 "enable": false 00:23:57.696 } 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "method": "bdev_malloc_create", 00:23:57.696 "params": { 00:23:57.696 "name": "malloc0", 00:23:57.696 "num_blocks": 8192, 00:23:57.696 "block_size": 4096, 00:23:57.696 "physical_block_size": 4096, 00:23:57.696 "uuid": "a6986c96-a1de-4931-bba7-b4de2040b8f2", 00:23:57.696 "optimal_io_boundary": 0, 00:23:57.696 "md_size": 0, 00:23:57.696 "dif_type": 0, 00:23:57.696 "dif_is_head_of_md": false, 00:23:57.696 "dif_pi_format": 0 00:23:57.696 } 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "method": "bdev_wait_for_examine" 00:23:57.696 } 00:23:57.696 ] 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "subsystem": "nbd", 00:23:57.696 "config": [] 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "subsystem": "scheduler", 00:23:57.696 "config": [ 00:23:57.696 { 00:23:57.696 "method": "framework_set_scheduler", 00:23:57.696 "params": { 00:23:57.696 "name": "static" 00:23:57.696 } 00:23:57.696 } 00:23:57.696 ] 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "subsystem": "nvmf", 00:23:57.696 "config": [ 00:23:57.696 { 00:23:57.696 "method": "nvmf_set_config", 00:23:57.696 "params": { 00:23:57.696 "discovery_filter": "match_any", 00:23:57.696 "admin_cmd_passthru": { 00:23:57.696 "identify_ctrlr": false 00:23:57.696 }, 00:23:57.696 "dhchap_digests": [ 00:23:57.696 "sha256", 00:23:57.696 "sha384", 00:23:57.696 "sha512" 00:23:57.696 ], 00:23:57.696 "dhchap_dhgroups": [ 00:23:57.696 "null", 00:23:57.696 "ffdhe2048", 00:23:57.696 "ffdhe3072", 00:23:57.696 "ffdhe4096", 00:23:57.696 "ffdhe6144", 00:23:57.696 "ffdhe8192" 00:23:57.696 ] 00:23:57.696 } 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "method": "nvmf_set_max_subsystems", 00:23:57.696 "params": { 00:23:57.696 "max_subsystems": 1024 00:23:57.696 } 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "method": "nvmf_set_crdt", 00:23:57.696 "params": { 00:23:57.696 "crdt1": 0, 00:23:57.696 "crdt2": 0, 00:23:57.696 "crdt3": 0 00:23:57.696 } 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "method": "nvmf_create_transport", 00:23:57.696 "params": { 00:23:57.696 "trtype": "TCP", 00:23:57.696 "max_queue_depth": 128, 00:23:57.696 "max_io_qpairs_per_ctrlr": 127, 00:23:57.696 "in_capsule_data_size": 4096, 00:23:57.696 "max_io_size": 131072, 00:23:57.696 "io_unit_size": 131072, 00:23:57.696 "max_aq_depth": 128, 00:23:57.696 "num_shared_buffers": 511, 00:23:57.696 "buf_cache_size": 4294967295, 00:23:57.696 "dif_insert_or_strip": false, 00:23:57.696 "zcopy": false, 00:23:57.696 "c2h_success": false, 00:23:57.696 "sock_priority": 0, 00:23:57.696 "abort_timeout_sec": 1, 00:23:57.696 "ack_timeout": 0, 00:23:57.696 "data_wr_pool_size": 0 00:23:57.696 } 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "method": "nvmf_create_subsystem", 00:23:57.696 "params": { 00:23:57.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.696 "allow_any_host": false, 00:23:57.696 "serial_number": "SPDK00000000000001", 00:23:57.696 "model_number": "SPDK bdev Controller", 00:23:57.696 "max_namespaces": 10, 00:23:57.696 "min_cntlid": 1, 00:23:57.696 
"max_cntlid": 65519, 00:23:57.696 "ana_reporting": false 00:23:57.696 } 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "method": "nvmf_subsystem_add_host", 00:23:57.696 "params": { 00:23:57.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.696 "host": "nqn.2016-06.io.spdk:host1", 00:23:57.696 "psk": "key0" 00:23:57.696 } 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "method": "nvmf_subsystem_add_ns", 00:23:57.696 "params": { 00:23:57.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.696 "namespace": { 00:23:57.696 "nsid": 1, 00:23:57.696 "bdev_name": "malloc0", 00:23:57.696 "nguid": "A6986C96A1DE4931BBA7B4DE2040B8F2", 00:23:57.696 "uuid": "a6986c96-a1de-4931-bba7-b4de2040b8f2", 00:23:57.696 "no_auto_visible": false 00:23:57.696 } 00:23:57.696 } 00:23:57.696 }, 00:23:57.696 { 00:23:57.696 "method": "nvmf_subsystem_add_listener", 00:23:57.696 "params": { 00:23:57.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.696 "listen_address": { 00:23:57.696 "trtype": "TCP", 00:23:57.696 "adrfam": "IPv4", 00:23:57.696 "traddr": "10.0.0.2", 00:23:57.696 "trsvcid": "4420" 00:23:57.696 }, 00:23:57.696 "secure_channel": true 00:23:57.696 } 00:23:57.696 } 00:23:57.696 ] 00:23:57.696 } 00:23:57.696 ] 00:23:57.696 }' 00:23:57.696 23:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:57.955 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:57.955 "subsystems": [ 00:23:57.955 { 00:23:57.955 "subsystem": "keyring", 00:23:57.955 "config": [ 00:23:57.955 { 00:23:57.955 "method": "keyring_file_add_key", 00:23:57.955 "params": { 00:23:57.955 "name": "key0", 00:23:57.955 "path": "/tmp/tmp.EtYodpPzOo" 00:23:57.955 } 00:23:57.955 } 00:23:57.955 ] 00:23:57.955 }, 00:23:57.955 { 00:23:57.955 "subsystem": "iobuf", 00:23:57.955 "config": [ 00:23:57.955 { 00:23:57.955 "method": "iobuf_set_options", 00:23:57.955 "params": { 00:23:57.955 "small_pool_count": 8192, 00:23:57.955 "large_pool_count": 1024, 00:23:57.955 "small_bufsize": 8192, 00:23:57.955 "large_bufsize": 135168, 00:23:57.955 "enable_numa": false 00:23:57.955 } 00:23:57.955 } 00:23:57.955 ] 00:23:57.955 }, 00:23:57.955 { 00:23:57.955 "subsystem": "sock", 00:23:57.955 "config": [ 00:23:57.955 { 00:23:57.955 "method": "sock_set_default_impl", 00:23:57.955 "params": { 00:23:57.955 "impl_name": "posix" 00:23:57.955 } 00:23:57.955 }, 00:23:57.955 { 00:23:57.955 "method": "sock_impl_set_options", 00:23:57.955 "params": { 00:23:57.955 "impl_name": "ssl", 00:23:57.955 "recv_buf_size": 4096, 00:23:57.955 "send_buf_size": 4096, 00:23:57.955 "enable_recv_pipe": true, 00:23:57.955 "enable_quickack": false, 00:23:57.955 "enable_placement_id": 0, 00:23:57.955 "enable_zerocopy_send_server": true, 00:23:57.955 "enable_zerocopy_send_client": false, 00:23:57.955 "zerocopy_threshold": 0, 00:23:57.955 "tls_version": 0, 00:23:57.955 "enable_ktls": false 00:23:57.955 } 00:23:57.955 }, 00:23:57.955 { 00:23:57.955 "method": "sock_impl_set_options", 00:23:57.955 "params": { 00:23:57.955 "impl_name": "posix", 00:23:57.955 "recv_buf_size": 2097152, 00:23:57.955 "send_buf_size": 2097152, 00:23:57.955 "enable_recv_pipe": true, 00:23:57.955 "enable_quickack": false, 00:23:57.955 "enable_placement_id": 0, 00:23:57.955 "enable_zerocopy_send_server": true, 00:23:57.955 "enable_zerocopy_send_client": false, 00:23:57.955 "zerocopy_threshold": 0, 00:23:57.955 "tls_version": 0, 00:23:57.955 "enable_ktls": false 00:23:57.955 } 00:23:57.955 
} 00:23:57.955 ] 00:23:57.955 }, 00:23:57.955 { 00:23:57.955 "subsystem": "vmd", 00:23:57.955 "config": [] 00:23:57.955 }, 00:23:57.955 { 00:23:57.955 "subsystem": "accel", 00:23:57.955 "config": [ 00:23:57.955 { 00:23:57.955 "method": "accel_set_options", 00:23:57.955 "params": { 00:23:57.955 "small_cache_size": 128, 00:23:57.955 "large_cache_size": 16, 00:23:57.955 "task_count": 2048, 00:23:57.955 "sequence_count": 2048, 00:23:57.955 "buf_count": 2048 00:23:57.955 } 00:23:57.955 } 00:23:57.955 ] 00:23:57.955 }, 00:23:57.955 { 00:23:57.955 "subsystem": "bdev", 00:23:57.955 "config": [ 00:23:57.955 { 00:23:57.955 "method": "bdev_set_options", 00:23:57.955 "params": { 00:23:57.955 "bdev_io_pool_size": 65535, 00:23:57.955 "bdev_io_cache_size": 256, 00:23:57.955 "bdev_auto_examine": true, 00:23:57.955 "iobuf_small_cache_size": 128, 00:23:57.955 "iobuf_large_cache_size": 16 00:23:57.955 } 00:23:57.955 }, 00:23:57.955 { 00:23:57.955 "method": "bdev_raid_set_options", 00:23:57.955 "params": { 00:23:57.955 "process_window_size_kb": 1024, 00:23:57.955 "process_max_bandwidth_mb_sec": 0 00:23:57.955 } 00:23:57.955 }, 00:23:57.955 { 00:23:57.955 "method": "bdev_iscsi_set_options", 00:23:57.955 "params": { 00:23:57.955 "timeout_sec": 30 00:23:57.955 } 00:23:57.955 }, 00:23:57.955 { 00:23:57.955 "method": "bdev_nvme_set_options", 00:23:57.955 "params": { 00:23:57.955 "action_on_timeout": "none", 00:23:57.955 "timeout_us": 0, 00:23:57.955 "timeout_admin_us": 0, 00:23:57.955 "keep_alive_timeout_ms": 10000, 00:23:57.955 "arbitration_burst": 0, 00:23:57.955 "low_priority_weight": 0, 00:23:57.955 "medium_priority_weight": 0, 00:23:57.955 "high_priority_weight": 0, 00:23:57.955 "nvme_adminq_poll_period_us": 10000, 00:23:57.955 "nvme_ioq_poll_period_us": 0, 00:23:57.955 "io_queue_requests": 512, 00:23:57.955 "delay_cmd_submit": true, 00:23:57.955 "transport_retry_count": 4, 00:23:57.955 "bdev_retry_count": 3, 00:23:57.955 "transport_ack_timeout": 0, 00:23:57.955 "ctrlr_loss_timeout_sec": 0, 00:23:57.955 "reconnect_delay_sec": 0, 00:23:57.955 "fast_io_fail_timeout_sec": 0, 00:23:57.955 "disable_auto_failback": false, 00:23:57.955 "generate_uuids": false, 00:23:57.955 "transport_tos": 0, 00:23:57.955 "nvme_error_stat": false, 00:23:57.955 "rdma_srq_size": 0, 00:23:57.955 "io_path_stat": false, 00:23:57.955 "allow_accel_sequence": false, 00:23:57.955 "rdma_max_cq_size": 0, 00:23:57.955 "rdma_cm_event_timeout_ms": 0, 00:23:57.955 "dhchap_digests": [ 00:23:57.955 "sha256", 00:23:57.955 "sha384", 00:23:57.955 "sha512" 00:23:57.955 ], 00:23:57.955 "dhchap_dhgroups": [ 00:23:57.955 "null", 00:23:57.955 "ffdhe2048", 00:23:57.955 "ffdhe3072", 00:23:57.955 "ffdhe4096", 00:23:57.955 "ffdhe6144", 00:23:57.955 "ffdhe8192" 00:23:57.955 ] 00:23:57.955 } 00:23:57.955 }, 00:23:57.955 { 00:23:57.955 "method": "bdev_nvme_attach_controller", 00:23:57.955 "params": { 00:23:57.955 "name": "TLSTEST", 00:23:57.955 "trtype": "TCP", 00:23:57.955 "adrfam": "IPv4", 00:23:57.955 "traddr": "10.0.0.2", 00:23:57.955 "trsvcid": "4420", 00:23:57.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.955 "prchk_reftag": false, 00:23:57.955 "prchk_guard": false, 00:23:57.955 "ctrlr_loss_timeout_sec": 0, 00:23:57.955 "reconnect_delay_sec": 0, 00:23:57.955 "fast_io_fail_timeout_sec": 0, 00:23:57.955 "psk": "key0", 00:23:57.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.956 "hdgst": false, 00:23:57.956 "ddgst": false, 00:23:57.956 "multipath": "multipath" 00:23:57.956 } 00:23:57.956 }, 00:23:57.956 { 00:23:57.956 "method": 
"bdev_nvme_set_hotplug", 00:23:57.956 "params": { 00:23:57.956 "period_us": 100000, 00:23:57.956 "enable": false 00:23:57.956 } 00:23:57.956 }, 00:23:57.956 { 00:23:57.956 "method": "bdev_wait_for_examine" 00:23:57.956 } 00:23:57.956 ] 00:23:57.956 }, 00:23:57.956 { 00:23:57.956 "subsystem": "nbd", 00:23:57.956 "config": [] 00:23:57.956 } 00:23:57.956 ] 00:23:57.956 }' 00:23:57.956 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 216975 00:23:57.956 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 216975 ']' 00:23:57.956 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 216975 00:23:57.956 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:57.956 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.956 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216975 00:23:57.956 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:57.956 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:57.956 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216975' 00:23:57.956 killing process with pid 216975 00:23:57.956 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 216975 00:23:57.956 Received shutdown signal, test time was about 10.000000 seconds 00:23:57.956 00:23:57.956 Latency(us) 00:23:57.956 [2024-11-19T22:48:32.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.956 [2024-11-19T22:48:32.268Z] =================================================================================================================== 00:23:57.956 [2024-11-19T22:48:32.268Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:57.956 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 216975 00:23:58.214 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 216715 00:23:58.214 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 216715 ']' 00:23:58.214 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 216715 00:23:58.214 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:58.214 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.214 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216715 00:23:58.214 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:58.214 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:58.214 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216715' 00:23:58.214 killing process with pid 216715 00:23:58.214 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 216715 00:23:58.214 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 216715 00:23:58.473 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:58.473 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:58.473 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:58.473 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:58.473 "subsystems": [ 00:23:58.473 { 00:23:58.473 "subsystem": "keyring", 00:23:58.473 "config": [ 00:23:58.473 { 00:23:58.473 "method": "keyring_file_add_key", 00:23:58.473 "params": { 00:23:58.473 "name": "key0", 00:23:58.473 "path": "/tmp/tmp.EtYodpPzOo" 00:23:58.473 } 00:23:58.473 } 00:23:58.473 ] 00:23:58.473 }, 00:23:58.473 { 00:23:58.473 "subsystem": "iobuf", 00:23:58.473 "config": [ 00:23:58.473 { 00:23:58.473 "method": "iobuf_set_options", 00:23:58.473 "params": { 00:23:58.473 "small_pool_count": 8192, 00:23:58.473 "large_pool_count": 1024, 00:23:58.473 "small_bufsize": 8192, 00:23:58.473 "large_bufsize": 135168, 00:23:58.473 "enable_numa": false 00:23:58.473 } 00:23:58.473 } 00:23:58.473 ] 00:23:58.473 }, 00:23:58.473 { 00:23:58.473 "subsystem": "sock", 00:23:58.473 "config": [ 00:23:58.473 { 00:23:58.473 "method": "sock_set_default_impl", 00:23:58.473 "params": { 00:23:58.473 "impl_name": "posix" 00:23:58.473 } 00:23:58.473 }, 00:23:58.473 { 00:23:58.473 "method": "sock_impl_set_options", 00:23:58.473 "params": { 00:23:58.473 "impl_name": "ssl", 00:23:58.473 "recv_buf_size": 4096, 00:23:58.473 "send_buf_size": 4096, 00:23:58.473 "enable_recv_pipe": true, 00:23:58.473 "enable_quickack": false, 00:23:58.473 "enable_placement_id": 0, 00:23:58.473 "enable_zerocopy_send_server": true, 00:23:58.473 "enable_zerocopy_send_client": false, 00:23:58.473 "zerocopy_threshold": 0, 00:23:58.473 "tls_version": 0, 00:23:58.473 "enable_ktls": false 00:23:58.473 } 00:23:58.473 }, 00:23:58.473 { 00:23:58.473 "method": "sock_impl_set_options", 00:23:58.473 "params": { 00:23:58.473 "impl_name": "posix", 00:23:58.473 "recv_buf_size": 2097152, 00:23:58.473 "send_buf_size": 2097152, 00:23:58.473 "enable_recv_pipe": true, 00:23:58.473 "enable_quickack": false, 00:23:58.473 "enable_placement_id": 0, 00:23:58.473 "enable_zerocopy_send_server": true, 00:23:58.473 "enable_zerocopy_send_client": false, 00:23:58.473 "zerocopy_threshold": 0, 00:23:58.473 "tls_version": 0, 00:23:58.473 "enable_ktls": false 00:23:58.473 } 00:23:58.473 } 00:23:58.473 ] 00:23:58.473 }, 00:23:58.473 { 00:23:58.473 "subsystem": "vmd", 00:23:58.473 "config": [] 00:23:58.473 }, 00:23:58.473 { 00:23:58.473 "subsystem": "accel", 00:23:58.473 "config": [ 00:23:58.473 { 00:23:58.473 "method": "accel_set_options", 00:23:58.473 "params": { 00:23:58.473 "small_cache_size": 128, 00:23:58.473 "large_cache_size": 16, 00:23:58.473 "task_count": 2048, 00:23:58.473 "sequence_count": 2048, 00:23:58.473 "buf_count": 2048 00:23:58.473 } 00:23:58.473 } 00:23:58.473 ] 00:23:58.473 }, 00:23:58.473 { 00:23:58.473 "subsystem": "bdev", 00:23:58.473 "config": [ 00:23:58.473 { 00:23:58.473 "method": "bdev_set_options", 00:23:58.473 "params": { 00:23:58.473 "bdev_io_pool_size": 65535, 00:23:58.473 "bdev_io_cache_size": 256, 00:23:58.473 "bdev_auto_examine": true, 00:23:58.473 "iobuf_small_cache_size": 128, 00:23:58.473 "iobuf_large_cache_size": 16 00:23:58.473 } 00:23:58.473 }, 00:23:58.473 { 00:23:58.473 "method": "bdev_raid_set_options", 00:23:58.473 "params": { 00:23:58.473 "process_window_size_kb": 1024, 00:23:58.473 "process_max_bandwidth_mb_sec": 0 00:23:58.473 } 00:23:58.473 }, 
00:23:58.473 { 00:23:58.473 "method": "bdev_iscsi_set_options", 00:23:58.473 "params": { 00:23:58.473 "timeout_sec": 30 00:23:58.473 } 00:23:58.473 }, 00:23:58.473 { 00:23:58.473 "method": "bdev_nvme_set_options", 00:23:58.473 "params": { 00:23:58.473 "action_on_timeout": "none", 00:23:58.473 "timeout_us": 0, 00:23:58.473 "timeout_admin_us": 0, 00:23:58.473 "keep_alive_timeout_ms": 10000, 00:23:58.473 "arbitration_burst": 0, 00:23:58.473 "low_priority_weight": 0, 00:23:58.473 "medium_priority_weight": 0, 00:23:58.473 "high_priority_weight": 0, 00:23:58.473 "nvme_adminq_poll_period_us": 10000, 00:23:58.473 "nvme_ioq_poll_period_us": 0, 00:23:58.473 "io_queue_requests": 0, 00:23:58.473 "delay_cmd_submit": true, 00:23:58.473 "transport_retry_count": 4, 00:23:58.473 "bdev_retry_count": 3, 00:23:58.473 "transport_ack_timeout": 0, 00:23:58.473 "ctrlr_loss_timeout_sec": 0, 00:23:58.473 "reconnect_delay_sec": 0, 00:23:58.473 "fast_io_fail_timeout_sec": 0, 00:23:58.473 "disable_auto_failback": false, 00:23:58.473 "generate_uuids": false, 00:23:58.473 "transport_tos": 0, 00:23:58.473 "nvme_error_stat": false, 00:23:58.473 "rdma_srq_size": 0, 00:23:58.473 "io_path_stat": false, 00:23:58.473 "allow_accel_sequence": false, 00:23:58.473 "rdma_max_cq_size": 0, 00:23:58.473 "rdma_cm_event_timeout_ms": 0, 00:23:58.473 "dhchap_digests": [ 00:23:58.473 "sha256", 00:23:58.473 "sha384", 00:23:58.473 "sha512" 00:23:58.473 ], 00:23:58.473 "dhchap_dhgroups": [ 00:23:58.473 "null", 00:23:58.473 "ffdhe2048", 00:23:58.473 "ffdhe3072", 00:23:58.473 "ffdhe4096", 00:23:58.473 "ffdhe6144", 00:23:58.473 "ffdhe8192" 00:23:58.473 ] 00:23:58.473 } 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "method": "bdev_nvme_set_hotplug", 00:23:58.474 "params": { 00:23:58.474 "period_us": 100000, 00:23:58.474 "enable": false 00:23:58.474 } 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "method": "bdev_malloc_create", 00:23:58.474 "params": { 00:23:58.474 "name": "malloc0", 00:23:58.474 "num_blocks": 8192, 00:23:58.474 "block_size": 4096, 00:23:58.474 "physical_block_size": 4096, 00:23:58.474 "uuid": "a6986c96-a1de-4931-bba7-b4de2040b8f2", 00:23:58.474 "optimal_io_boundary": 0, 00:23:58.474 "md_size": 0, 00:23:58.474 "dif_type": 0, 00:23:58.474 "dif_is_head_of_md": false, 00:23:58.474 "dif_pi_format": 0 00:23:58.474 } 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "method": "bdev_wait_for_examine" 00:23:58.474 } 00:23:58.474 ] 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "subsystem": "nbd", 00:23:58.474 "config": [] 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "subsystem": "scheduler", 00:23:58.474 "config": [ 00:23:58.474 { 00:23:58.474 "method": "framework_set_scheduler", 00:23:58.474 "params": { 00:23:58.474 "name": "static" 00:23:58.474 } 00:23:58.474 } 00:23:58.474 ] 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "subsystem": "nvmf", 00:23:58.474 "config": [ 00:23:58.474 { 00:23:58.474 "method": "nvmf_set_config", 00:23:58.474 "params": { 00:23:58.474 "discovery_filter": "match_any", 00:23:58.474 "admin_cmd_passthru": { 00:23:58.474 "identify_ctrlr": false 00:23:58.474 }, 00:23:58.474 "dhchap_digests": [ 00:23:58.474 "sha256", 00:23:58.474 "sha384", 00:23:58.474 "sha512" 00:23:58.474 ], 00:23:58.474 "dhchap_dhgroups": [ 00:23:58.474 "null", 00:23:58.474 "ffdhe2048", 00:23:58.474 "ffdhe3072", 00:23:58.474 "ffdhe4096", 00:23:58.474 "ffdhe6144", 00:23:58.474 "ffdhe8192" 00:23:58.474 ] 00:23:58.474 } 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "method": "nvmf_set_max_subsystems", 00:23:58.474 "params": { 00:23:58.474 "max_subsystems": 1024 
00:23:58.474 } 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "method": "nvmf_set_crdt", 00:23:58.474 "params": { 00:23:58.474 "crdt1": 0, 00:23:58.474 "crdt2": 0, 00:23:58.474 "crdt3": 0 00:23:58.474 } 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "method": "nvmf_create_transport", 00:23:58.474 "params": { 00:23:58.474 "trtype": "TCP", 00:23:58.474 "max_queue_depth": 128, 00:23:58.474 "max_io_qpairs_per_ctrlr": 127, 00:23:58.474 "in_capsule_data_size": 4096, 00:23:58.474 "max_io_size": 131072, 00:23:58.474 "io_unit_size": 131072, 00:23:58.474 "max_aq_depth": 128, 00:23:58.474 "num_shared_buffers": 511, 00:23:58.474 "buf_cache_size": 4294967295, 00:23:58.474 "dif_insert_or_strip": false, 00:23:58.474 "zcopy": false, 00:23:58.474 "c2h_success": false, 00:23:58.474 "sock_priority": 0, 00:23:58.474 "abort_timeout_sec": 1, 00:23:58.474 "ack_timeout": 0, 00:23:58.474 "data_wr_pool_size": 0 00:23:58.474 } 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "method": "nvmf_create_subsystem", 00:23:58.474 "params": { 00:23:58.474 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.474 "allow_any_host": false, 00:23:58.474 "serial_number": "SPDK00000000000001", 00:23:58.474 "model_number": "SPDK bdev Controller", 00:23:58.474 "max_namespaces": 10, 00:23:58.474 "min_cntlid": 1, 00:23:58.474 "max_cntlid": 65519, 00:23:58.474 "ana_reporting": false 00:23:58.474 } 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "method": "nvmf_subsystem_add_host", 00:23:58.474 "params": { 00:23:58.474 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.474 "host": "nqn.2016-06.io.spdk:host1", 00:23:58.474 "psk": "key0" 00:23:58.474 } 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "method": "nvmf_subsystem_add_ns", 00:23:58.474 "params": { 00:23:58.474 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.474 "namespace": { 00:23:58.474 "nsid": 1, 00:23:58.474 "bdev_name": "malloc0", 00:23:58.474 "nguid": "A6986C96A1DE4931BBA7B4DE2040B8F2", 00:23:58.474 "uuid": "a6986c96-a1de-4931-bba7-b4de2040b8f2", 00:23:58.474 "no_auto_visible": false 00:23:58.474 } 00:23:58.474 } 00:23:58.474 }, 00:23:58.474 { 00:23:58.474 "method": "nvmf_subsystem_add_listener", 00:23:58.474 "params": { 00:23:58.474 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.474 "listen_address": { 00:23:58.474 "trtype": "TCP", 00:23:58.474 "adrfam": "IPv4", 00:23:58.474 "traddr": "10.0.0.2", 00:23:58.474 "trsvcid": "4420" 00:23:58.474 }, 00:23:58.474 "secure_channel": true 00:23:58.474 } 00:23:58.474 } 00:23:58.474 ] 00:23:58.474 } 00:23:58.474 ] 00:23:58.474 }' 00:23:58.474 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.474 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=217163 00:23:58.474 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:58.474 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 217163 00:23:58.474 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 217163 ']' 00:23:58.474 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.474 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.474 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:58.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.474 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.474 23:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.474 [2024-11-19 23:48:32.732183] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:23:58.474 [2024-11-19 23:48:32.732255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.733 [2024-11-19 23:48:32.802835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.733 [2024-11-19 23:48:32.849184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.733 [2024-11-19 23:48:32.849241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.733 [2024-11-19 23:48:32.849278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.733 [2024-11-19 23:48:32.849290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.733 [2024-11-19 23:48:32.849299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.733 [2024-11-19 23:48:32.849995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.991 [2024-11-19 23:48:33.097741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.991 [2024-11-19 23:48:33.129744] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.991 [2024-11-19 23:48:33.130050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.557 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.557 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:59.557 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.557 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.557 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.557 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.557 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=217316 00:23:59.557 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 217316 /var/tmp/bdevperf.sock 00:23:59.558 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 217316 ']' 00:23:59.558 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.558 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.558 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:59.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.558 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:59.558 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.558 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:59.558 "subsystems": [ 00:23:59.558 { 00:23:59.558 "subsystem": "keyring", 00:23:59.558 "config": [ 00:23:59.558 { 00:23:59.558 "method": "keyring_file_add_key", 00:23:59.558 "params": { 00:23:59.558 "name": "key0", 00:23:59.558 "path": "/tmp/tmp.EtYodpPzOo" 00:23:59.558 } 00:23:59.558 } 00:23:59.558 ] 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "subsystem": "iobuf", 00:23:59.558 "config": [ 00:23:59.558 { 00:23:59.558 "method": "iobuf_set_options", 00:23:59.558 "params": { 00:23:59.558 "small_pool_count": 8192, 00:23:59.558 "large_pool_count": 1024, 00:23:59.558 "small_bufsize": 8192, 00:23:59.558 "large_bufsize": 135168, 00:23:59.558 "enable_numa": false 00:23:59.558 } 00:23:59.558 } 00:23:59.558 ] 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "subsystem": "sock", 00:23:59.558 "config": [ 00:23:59.558 { 00:23:59.558 "method": "sock_set_default_impl", 00:23:59.558 "params": { 00:23:59.558 "impl_name": "posix" 00:23:59.558 } 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "method": "sock_impl_set_options", 00:23:59.558 "params": { 00:23:59.558 "impl_name": "ssl", 00:23:59.558 "recv_buf_size": 4096, 00:23:59.558 "send_buf_size": 4096, 00:23:59.558 "enable_recv_pipe": true, 00:23:59.558 "enable_quickack": false, 00:23:59.558 "enable_placement_id": 0, 00:23:59.558 "enable_zerocopy_send_server": true, 00:23:59.558 "enable_zerocopy_send_client": false, 00:23:59.558 "zerocopy_threshold": 0, 00:23:59.558 "tls_version": 0, 00:23:59.558 "enable_ktls": false 00:23:59.558 } 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "method": "sock_impl_set_options", 00:23:59.558 "params": { 00:23:59.558 "impl_name": "posix", 00:23:59.558 "recv_buf_size": 2097152, 00:23:59.558 "send_buf_size": 2097152, 00:23:59.558 "enable_recv_pipe": true, 00:23:59.558 "enable_quickack": false, 00:23:59.558 "enable_placement_id": 0, 00:23:59.558 "enable_zerocopy_send_server": true, 00:23:59.558 "enable_zerocopy_send_client": false, 00:23:59.558 "zerocopy_threshold": 0, 00:23:59.558 "tls_version": 0, 00:23:59.558 "enable_ktls": false 00:23:59.558 } 00:23:59.558 } 00:23:59.558 ] 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "subsystem": "vmd", 00:23:59.558 "config": [] 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "subsystem": "accel", 00:23:59.558 "config": [ 00:23:59.558 { 00:23:59.558 "method": "accel_set_options", 00:23:59.558 "params": { 00:23:59.558 "small_cache_size": 128, 00:23:59.558 "large_cache_size": 16, 00:23:59.558 "task_count": 2048, 00:23:59.558 "sequence_count": 2048, 00:23:59.558 "buf_count": 2048 00:23:59.558 } 00:23:59.558 } 00:23:59.558 ] 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "subsystem": "bdev", 00:23:59.558 "config": [ 00:23:59.558 { 00:23:59.558 "method": "bdev_set_options", 00:23:59.558 "params": { 00:23:59.558 "bdev_io_pool_size": 65535, 00:23:59.558 "bdev_io_cache_size": 256, 00:23:59.558 "bdev_auto_examine": true, 00:23:59.558 "iobuf_small_cache_size": 128, 00:23:59.558 "iobuf_large_cache_size": 16 00:23:59.558 } 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "method": 
"bdev_raid_set_options", 00:23:59.558 "params": { 00:23:59.558 "process_window_size_kb": 1024, 00:23:59.558 "process_max_bandwidth_mb_sec": 0 00:23:59.558 } 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "method": "bdev_iscsi_set_options", 00:23:59.558 "params": { 00:23:59.558 "timeout_sec": 30 00:23:59.558 } 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "method": "bdev_nvme_set_options", 00:23:59.558 "params": { 00:23:59.558 "action_on_timeout": "none", 00:23:59.558 "timeout_us": 0, 00:23:59.558 "timeout_admin_us": 0, 00:23:59.558 "keep_alive_timeout_ms": 10000, 00:23:59.558 "arbitration_burst": 0, 00:23:59.558 "low_priority_weight": 0, 00:23:59.558 "medium_priority_weight": 0, 00:23:59.558 "high_priority_weight": 0, 00:23:59.558 "nvme_adminq_poll_period_us": 10000, 00:23:59.558 "nvme_ioq_poll_period_us": 0, 00:23:59.558 "io_queue_requests": 512, 00:23:59.558 "delay_cmd_submit": true, 00:23:59.558 "transport_retry_count": 4, 00:23:59.558 "bdev_retry_count": 3, 00:23:59.558 "transport_ack_timeout": 0, 00:23:59.558 "ctrlr_loss_timeout_sec": 0, 00:23:59.558 "reconnect_delay_sec": 0, 00:23:59.558 "fast_io_fail_timeout_sec": 0, 00:23:59.558 "disable_auto_failback": false, 00:23:59.558 "generate_uuids": false, 00:23:59.558 "transport_tos": 0, 00:23:59.558 "nvme_error_stat": false, 00:23:59.558 "rdma_srq_size": 0, 00:23:59.558 "io_path_stat": false, 00:23:59.558 "allow_accel_sequence": false, 00:23:59.558 "rdma_max_cq_size": 0, 00:23:59.558 "rdma_cm_event_timeout_ms": 0, 00:23:59.558 "dhchap_digests": [ 00:23:59.558 "sha256", 00:23:59.558 "sha384", 00:23:59.558 "sha512" 00:23:59.558 ], 00:23:59.558 "dhchap_dhgroups": [ 00:23:59.558 "null", 00:23:59.558 "ffdhe2048", 00:23:59.558 "ffdhe3072", 00:23:59.558 "ffdhe4096", 00:23:59.558 "ffdhe6144", 00:23:59.558 "ffdhe8192" 00:23:59.558 ] 00:23:59.558 } 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "method": "bdev_nvme_attach_controller", 00:23:59.558 "params": { 00:23:59.558 "name": "TLSTEST", 00:23:59.558 "trtype": "TCP", 00:23:59.558 "adrfam": "IPv4", 00:23:59.558 "traddr": "10.0.0.2", 00:23:59.558 "trsvcid": "4420", 00:23:59.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.558 "prchk_reftag": false, 00:23:59.558 "prchk_guard": false, 00:23:59.558 "ctrlr_loss_timeout_sec": 0, 00:23:59.558 "reconnect_delay_sec": 0, 00:23:59.558 "fast_io_fail_timeout_sec": 0, 00:23:59.558 "psk": "key0", 00:23:59.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.558 "hdgst": false, 00:23:59.558 "ddgst": false, 00:23:59.558 "multipath": "multipath" 00:23:59.558 } 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "method": "bdev_nvme_set_hotplug", 00:23:59.558 "params": { 00:23:59.558 "period_us": 100000, 00:23:59.558 "enable": false 00:23:59.558 } 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "method": "bdev_wait_for_examine" 00:23:59.558 } 00:23:59.558 ] 00:23:59.558 }, 00:23:59.558 { 00:23:59.558 "subsystem": "nbd", 00:23:59.558 "config": [] 00:23:59.558 } 00:23:59.558 ] 00:23:59.558 }' 00:23:59.558 23:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.559 [2024-11-19 23:48:33.790769] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:23:59.559 [2024-11-19 23:48:33.790858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid217316 ] 00:23:59.559 [2024-11-19 23:48:33.855738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.817 [2024-11-19 23:48:33.901506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.817 [2024-11-19 23:48:34.075409] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.075 23:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.075 23:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:00.075 23:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:00.075 Running I/O for 10 seconds... 00:24:02.382 2950.00 IOPS, 11.52 MiB/s [2024-11-19T22:48:37.628Z] 3009.50 IOPS, 11.76 MiB/s [2024-11-19T22:48:38.560Z] 3026.00 IOPS, 11.82 MiB/s [2024-11-19T22:48:39.491Z] 3023.50 IOPS, 11.81 MiB/s [2024-11-19T22:48:40.423Z] 3022.60 IOPS, 11.81 MiB/s [2024-11-19T22:48:41.357Z] 3031.50 IOPS, 11.84 MiB/s [2024-11-19T22:48:42.729Z] 3031.43 IOPS, 11.84 MiB/s [2024-11-19T22:48:43.662Z] 3035.00 IOPS, 11.86 MiB/s [2024-11-19T22:48:44.595Z] 3041.11 IOPS, 11.88 MiB/s [2024-11-19T22:48:44.595Z] 3043.80 IOPS, 11.89 MiB/s 00:24:10.283 Latency(us) 00:24:10.283 [2024-11-19T22:48:44.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.283 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:10.283 Verification LBA range: start 0x0 length 0x2000 00:24:10.283 TLSTESTn1 : 10.02 3050.17 11.91 0.00 0.00 41891.29 7330.32 39224.51 00:24:10.283 [2024-11-19T22:48:44.595Z] =================================================================================================================== 00:24:10.283 [2024-11-19T22:48:44.595Z] Total : 3050.17 11.91 0.00 0.00 41891.29 7330.32 39224.51 00:24:10.283 { 00:24:10.283 "results": [ 00:24:10.283 { 00:24:10.283 "job": "TLSTESTn1", 00:24:10.283 "core_mask": "0x4", 00:24:10.283 "workload": "verify", 00:24:10.283 "status": "finished", 00:24:10.283 "verify_range": { 00:24:10.283 "start": 0, 00:24:10.283 "length": 8192 00:24:10.283 }, 00:24:10.283 "queue_depth": 128, 00:24:10.283 "io_size": 4096, 00:24:10.283 "runtime": 10.021097, 00:24:10.283 "iops": 3050.165066758659, 00:24:10.283 "mibps": 11.914707292026012, 00:24:10.283 "io_failed": 0, 00:24:10.283 "io_timeout": 0, 00:24:10.283 "avg_latency_us": 41891.29219209919, 00:24:10.283 "min_latency_us": 7330.322962962963, 00:24:10.283 "max_latency_us": 39224.50962962963 00:24:10.283 } 00:24:10.283 ], 00:24:10.283 "core_count": 1 00:24:10.283 } 00:24:10.283 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 217316 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 217316 ']' 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 217316 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217316 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217316' 00:24:10.284 killing process with pid 217316 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 217316 00:24:10.284 Received shutdown signal, test time was about 10.000000 seconds 00:24:10.284 00:24:10.284 Latency(us) 00:24:10.284 [2024-11-19T22:48:44.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.284 [2024-11-19T22:48:44.596Z] =================================================================================================================== 00:24:10.284 [2024-11-19T22:48:44.596Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 217316 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 217163 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 217163 ']' 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 217163 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:10.284 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217163 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217163' 00:24:10.542 killing process with pid 217163 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 217163 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 217163 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=218634 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 218634 00:24:10.542 23:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 218634 ']' 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.542 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.801 [2024-11-19 23:48:44.864471] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:24:10.801 [2024-11-19 23:48:44.864562] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.801 [2024-11-19 23:48:44.937502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.801 [2024-11-19 23:48:44.982111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.801 [2024-11-19 23:48:44.982163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.801 [2024-11-19 23:48:44.982191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.801 [2024-11-19 23:48:44.982203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.801 [2024-11-19 23:48:44.982213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
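As an aside on the long JSON blobs above: they are save_config dumps that the test captures and then feeds back when it restarts the target and bdevperf with -c /dev/fd/62 and -c /dev/fd/63. A minimal sketch of that replay pattern follows, with the same SPDK_DIR assumption as the earlier sketch; the netns wrapper (ip netns exec cvl_0_0_ns_spdk) and the -i/-e flags from the real run are omitted for brevity.

  # Minimal sketch: capture live configuration and replay it on restart (tls.sh @198-@209 above).
  SPDK_DIR=/path/to/spdk                                         # assumption: local SPDK checkout
  RPC="$SPDK_DIR/scripts/rpc.py"
  tgtconf=$("$RPC" save_config)                                  # target config via /var/tmp/spdk.sock
  bdevperfconf=$("$RPC" -s /var/tmp/bdevperf.sock save_config)   # bdevperf config via its own socket
  # In the test the running pair is shut down here, then both apps are restarted with the
  # captured JSON passed as a config file through process substitution, which is what shows
  # up in the log as -c /dev/fd/62 and -c /dev/fd/63.
  "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x2 -c <(echo "$tgtconf") &
  "$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &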
00:24:10.801 [2024-11-19 23:48:44.982802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.801 23:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.801 23:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:10.801 23:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.801 23:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.801 23:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.059 23:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.059 23:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.EtYodpPzOo 00:24:11.059 23:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EtYodpPzOo 00:24:11.059 23:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:11.316 [2024-11-19 23:48:45.422725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.316 23:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:11.575 23:48:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:11.874 [2024-11-19 23:48:46.032394] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:11.874 [2024-11-19 23:48:46.032670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.874 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:12.174 malloc0 00:24:12.174 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:12.431 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo 00:24:12.690 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:12.948 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=218925 00:24:12.948 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:12.948 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:12.948 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 218925 /var/tmp/bdevperf.sock 00:24:12.948 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 218925 ']' 00:24:12.948 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.948 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.948 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.948 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.948 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.948 [2024-11-19 23:48:47.241538] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:24:12.948 [2024-11-19 23:48:47.241621] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid218925 ] 00:24:13.206 [2024-11-19 23:48:47.311391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.206 [2024-11-19 23:48:47.360275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.206 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.206 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:13.206 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo 00:24:13.464 23:48:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:13.721 [2024-11-19 23:48:48.014545] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.979 nvme0n1 00:24:13.979 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:13.979 Running I/O for 1 seconds... 
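The setup driven by target/tls.sh@52-59 and @229-230 above amounts to the following RPC sequence: create the TCP transport, create a subsystem, add a TLS-enabled listener (-k), back it with a malloc bdev namespace, register the PSK file via keyring_file_add_key, authorize the host with --psk, then have the bdevperf initiator load the same key and attach with it. Collected in one place (commands and the temporary key path are taken verbatim from the log; $SPDK is shorthand for the workspace SPDK directory):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    # Target side (default RPC socket /var/tmp/spdk.sock)
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # Initiator side (bdevperf's RPC socket)
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1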
00:24:15.170 3409.00 IOPS, 13.32 MiB/s 00:24:15.170 Latency(us) 00:24:15.170 [2024-11-19T22:48:49.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.170 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:15.170 Verification LBA range: start 0x0 length 0x2000 00:24:15.170 nvme0n1 : 1.02 3470.58 13.56 0.00 0.00 36552.43 6699.24 26602.76 00:24:15.170 [2024-11-19T22:48:49.482Z] =================================================================================================================== 00:24:15.170 [2024-11-19T22:48:49.482Z] Total : 3470.58 13.56 0.00 0.00 36552.43 6699.24 26602.76 00:24:15.170 { 00:24:15.170 "results": [ 00:24:15.170 { 00:24:15.170 "job": "nvme0n1", 00:24:15.170 "core_mask": "0x2", 00:24:15.170 "workload": "verify", 00:24:15.170 "status": "finished", 00:24:15.170 "verify_range": { 00:24:15.170 "start": 0, 00:24:15.170 "length": 8192 00:24:15.170 }, 00:24:15.170 "queue_depth": 128, 00:24:15.170 "io_size": 4096, 00:24:15.170 "runtime": 1.019139, 00:24:15.170 "iops": 3470.5766338055946, 00:24:15.170 "mibps": 13.556939975803104, 00:24:15.170 "io_failed": 0, 00:24:15.170 "io_timeout": 0, 00:24:15.170 "avg_latency_us": 36552.43138797265, 00:24:15.170 "min_latency_us": 6699.235555555556, 00:24:15.170 "max_latency_us": 26602.76148148148 00:24:15.170 } 00:24:15.170 ], 00:24:15.170 "core_count": 1 00:24:15.170 } 00:24:15.170 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 218925 00:24:15.170 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 218925 ']' 00:24:15.170 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 218925 00:24:15.170 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:15.170 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.170 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 218925 00:24:15.170 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:15.170 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:15.170 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 218925' 00:24:15.170 killing process with pid 218925 00:24:15.170 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 218925 00:24:15.170 Received shutdown signal, test time was about 1.000000 seconds 00:24:15.170 00:24:15.170 Latency(us) 00:24:15.170 [2024-11-19T22:48:49.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.170 [2024-11-19T22:48:49.482Z] =================================================================================================================== 00:24:15.170 [2024-11-19T22:48:49.482Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:15.170 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 218925 00:24:15.429 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 218634 00:24:15.429 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 218634 ']' 00:24:15.429 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 218634 00:24:15.429 23:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:15.429 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.429 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 218634 00:24:15.429 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:15.429 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:15.429 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 218634' 00:24:15.429 killing process with pid 218634 00:24:15.429 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 218634 00:24:15.429 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 218634 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=219206 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 219206 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 219206 ']' 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.688 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.688 [2024-11-19 23:48:49.822243] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:24:15.688 [2024-11-19 23:48:49.822327] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.688 [2024-11-19 23:48:49.902701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.688 [2024-11-19 23:48:49.949123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.688 [2024-11-19 23:48:49.949220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:15.688 [2024-11-19 23:48:49.949236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.688 [2024-11-19 23:48:49.949250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.688 [2024-11-19 23:48:49.949271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.688 [2024-11-19 23:48:49.949932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.947 [2024-11-19 23:48:50.093442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.947 malloc0 00:24:15.947 [2024-11-19 23:48:50.125505] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:15.947 [2024-11-19 23:48:50.125783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=219345 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 219345 /var/tmp/bdevperf.sock 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 219345 ']' 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.947 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.947 [2024-11-19 23:48:50.200042] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:24:15.947 [2024-11-19 23:48:50.200148] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid219345 ] 00:24:16.205 [2024-11-19 23:48:50.273295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.205 [2024-11-19 23:48:50.322871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.205 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.205 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:16.205 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EtYodpPzOo 00:24:16.463 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:16.721 [2024-11-19 23:48:50.954018] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.722 nvme0n1 00:24:16.979 23:48:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:16.979 Running I/O for 1 seconds... 00:24:17.914 3390.00 IOPS, 13.24 MiB/s 00:24:17.914 Latency(us) 00:24:17.914 [2024-11-19T22:48:52.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.914 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:17.914 Verification LBA range: start 0x0 length 0x2000 00:24:17.914 nvme0n1 : 1.02 3439.82 13.44 0.00 0.00 36834.24 7670.14 40777.96 00:24:17.914 [2024-11-19T22:48:52.226Z] =================================================================================================================== 00:24:17.914 [2024-11-19T22:48:52.226Z] Total : 3439.82 13.44 0.00 0.00 36834.24 7670.14 40777.96 00:24:17.914 { 00:24:17.914 "results": [ 00:24:17.914 { 00:24:17.914 "job": "nvme0n1", 00:24:17.914 "core_mask": "0x2", 00:24:17.914 "workload": "verify", 00:24:17.914 "status": "finished", 00:24:17.914 "verify_range": { 00:24:17.914 "start": 0, 00:24:17.914 "length": 8192 00:24:17.914 }, 00:24:17.914 "queue_depth": 128, 00:24:17.914 "io_size": 4096, 00:24:17.914 "runtime": 1.022728, 00:24:17.914 "iops": 3439.819776128159, 00:24:17.914 "mibps": 13.436796000500621, 00:24:17.914 "io_failed": 0, 00:24:17.914 "io_timeout": 0, 00:24:17.914 "avg_latency_us": 36834.23882087887, 00:24:17.914 "min_latency_us": 7670.139259259259, 00:24:17.914 "max_latency_us": 40777.955555555556 00:24:17.914 } 00:24:17.914 ], 00:24:17.914 "core_count": 1 00:24:17.914 } 00:24:17.914 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:17.914 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.914 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.172 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.172 23:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:18.172 "subsystems": [ 00:24:18.172 { 00:24:18.172 "subsystem": "keyring", 00:24:18.172 "config": [ 00:24:18.172 { 00:24:18.172 "method": "keyring_file_add_key", 00:24:18.172 "params": { 00:24:18.172 "name": "key0", 00:24:18.172 "path": "/tmp/tmp.EtYodpPzOo" 00:24:18.172 } 00:24:18.172 } 00:24:18.172 ] 00:24:18.172 }, 00:24:18.172 { 00:24:18.172 "subsystem": "iobuf", 00:24:18.172 "config": [ 00:24:18.172 { 00:24:18.172 "method": "iobuf_set_options", 00:24:18.172 "params": { 00:24:18.172 "small_pool_count": 8192, 00:24:18.172 "large_pool_count": 1024, 00:24:18.172 "small_bufsize": 8192, 00:24:18.172 "large_bufsize": 135168, 00:24:18.172 "enable_numa": false 00:24:18.172 } 00:24:18.172 } 00:24:18.172 ] 00:24:18.172 }, 00:24:18.172 { 00:24:18.172 "subsystem": "sock", 00:24:18.172 "config": [ 00:24:18.172 { 00:24:18.172 "method": "sock_set_default_impl", 00:24:18.172 "params": { 00:24:18.172 "impl_name": "posix" 00:24:18.172 } 00:24:18.172 }, 00:24:18.172 { 00:24:18.172 "method": "sock_impl_set_options", 00:24:18.172 "params": { 00:24:18.172 "impl_name": "ssl", 00:24:18.172 "recv_buf_size": 4096, 00:24:18.172 "send_buf_size": 4096, 00:24:18.172 "enable_recv_pipe": true, 00:24:18.172 "enable_quickack": false, 00:24:18.172 "enable_placement_id": 0, 00:24:18.172 "enable_zerocopy_send_server": true, 00:24:18.172 "enable_zerocopy_send_client": false, 00:24:18.172 "zerocopy_threshold": 0, 00:24:18.172 "tls_version": 0, 00:24:18.172 "enable_ktls": false 00:24:18.172 } 00:24:18.172 }, 00:24:18.172 { 00:24:18.172 "method": "sock_impl_set_options", 00:24:18.172 "params": { 00:24:18.172 "impl_name": "posix", 00:24:18.172 "recv_buf_size": 2097152, 00:24:18.172 "send_buf_size": 2097152, 00:24:18.172 "enable_recv_pipe": true, 00:24:18.172 "enable_quickack": false, 00:24:18.172 "enable_placement_id": 0, 00:24:18.172 "enable_zerocopy_send_server": true, 00:24:18.172 "enable_zerocopy_send_client": false, 00:24:18.172 "zerocopy_threshold": 0, 00:24:18.172 "tls_version": 0, 00:24:18.172 "enable_ktls": false 00:24:18.172 } 00:24:18.172 } 00:24:18.172 ] 00:24:18.172 }, 00:24:18.172 { 00:24:18.172 "subsystem": "vmd", 00:24:18.172 "config": [] 00:24:18.172 }, 00:24:18.172 { 00:24:18.172 "subsystem": "accel", 00:24:18.172 "config": [ 00:24:18.172 { 00:24:18.172 "method": "accel_set_options", 00:24:18.172 "params": { 00:24:18.172 "small_cache_size": 128, 00:24:18.172 "large_cache_size": 16, 00:24:18.172 "task_count": 2048, 00:24:18.172 "sequence_count": 2048, 00:24:18.172 "buf_count": 2048 00:24:18.172 } 00:24:18.172 } 00:24:18.172 ] 00:24:18.172 }, 00:24:18.172 { 00:24:18.172 "subsystem": "bdev", 00:24:18.172 "config": [ 00:24:18.172 { 00:24:18.172 "method": "bdev_set_options", 00:24:18.172 "params": { 00:24:18.172 "bdev_io_pool_size": 65535, 00:24:18.173 "bdev_io_cache_size": 256, 00:24:18.173 "bdev_auto_examine": true, 00:24:18.173 "iobuf_small_cache_size": 128, 00:24:18.173 "iobuf_large_cache_size": 16 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "bdev_raid_set_options", 00:24:18.173 "params": { 00:24:18.173 "process_window_size_kb": 1024, 00:24:18.173 "process_max_bandwidth_mb_sec": 0 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "bdev_iscsi_set_options", 00:24:18.173 "params": { 00:24:18.173 "timeout_sec": 30 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "bdev_nvme_set_options", 00:24:18.173 "params": { 00:24:18.173 "action_on_timeout": "none", 00:24:18.173 
"timeout_us": 0, 00:24:18.173 "timeout_admin_us": 0, 00:24:18.173 "keep_alive_timeout_ms": 10000, 00:24:18.173 "arbitration_burst": 0, 00:24:18.173 "low_priority_weight": 0, 00:24:18.173 "medium_priority_weight": 0, 00:24:18.173 "high_priority_weight": 0, 00:24:18.173 "nvme_adminq_poll_period_us": 10000, 00:24:18.173 "nvme_ioq_poll_period_us": 0, 00:24:18.173 "io_queue_requests": 0, 00:24:18.173 "delay_cmd_submit": true, 00:24:18.173 "transport_retry_count": 4, 00:24:18.173 "bdev_retry_count": 3, 00:24:18.173 "transport_ack_timeout": 0, 00:24:18.173 "ctrlr_loss_timeout_sec": 0, 00:24:18.173 "reconnect_delay_sec": 0, 00:24:18.173 "fast_io_fail_timeout_sec": 0, 00:24:18.173 "disable_auto_failback": false, 00:24:18.173 "generate_uuids": false, 00:24:18.173 "transport_tos": 0, 00:24:18.173 "nvme_error_stat": false, 00:24:18.173 "rdma_srq_size": 0, 00:24:18.173 "io_path_stat": false, 00:24:18.173 "allow_accel_sequence": false, 00:24:18.173 "rdma_max_cq_size": 0, 00:24:18.173 "rdma_cm_event_timeout_ms": 0, 00:24:18.173 "dhchap_digests": [ 00:24:18.173 "sha256", 00:24:18.173 "sha384", 00:24:18.173 "sha512" 00:24:18.173 ], 00:24:18.173 "dhchap_dhgroups": [ 00:24:18.173 "null", 00:24:18.173 "ffdhe2048", 00:24:18.173 "ffdhe3072", 00:24:18.173 "ffdhe4096", 00:24:18.173 "ffdhe6144", 00:24:18.173 "ffdhe8192" 00:24:18.173 ] 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "bdev_nvme_set_hotplug", 00:24:18.173 "params": { 00:24:18.173 "period_us": 100000, 00:24:18.173 "enable": false 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "bdev_malloc_create", 00:24:18.173 "params": { 00:24:18.173 "name": "malloc0", 00:24:18.173 "num_blocks": 8192, 00:24:18.173 "block_size": 4096, 00:24:18.173 "physical_block_size": 4096, 00:24:18.173 "uuid": "1d7b20a9-ca7f-4dc1-a0f9-0075cb981a82", 00:24:18.173 "optimal_io_boundary": 0, 00:24:18.173 "md_size": 0, 00:24:18.173 "dif_type": 0, 00:24:18.173 "dif_is_head_of_md": false, 00:24:18.173 "dif_pi_format": 0 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "bdev_wait_for_examine" 00:24:18.173 } 00:24:18.173 ] 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "subsystem": "nbd", 00:24:18.173 "config": [] 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "subsystem": "scheduler", 00:24:18.173 "config": [ 00:24:18.173 { 00:24:18.173 "method": "framework_set_scheduler", 00:24:18.173 "params": { 00:24:18.173 "name": "static" 00:24:18.173 } 00:24:18.173 } 00:24:18.173 ] 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "subsystem": "nvmf", 00:24:18.173 "config": [ 00:24:18.173 { 00:24:18.173 "method": "nvmf_set_config", 00:24:18.173 "params": { 00:24:18.173 "discovery_filter": "match_any", 00:24:18.173 "admin_cmd_passthru": { 00:24:18.173 "identify_ctrlr": false 00:24:18.173 }, 00:24:18.173 "dhchap_digests": [ 00:24:18.173 "sha256", 00:24:18.173 "sha384", 00:24:18.173 "sha512" 00:24:18.173 ], 00:24:18.173 "dhchap_dhgroups": [ 00:24:18.173 "null", 00:24:18.173 "ffdhe2048", 00:24:18.173 "ffdhe3072", 00:24:18.173 "ffdhe4096", 00:24:18.173 "ffdhe6144", 00:24:18.173 "ffdhe8192" 00:24:18.173 ] 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "nvmf_set_max_subsystems", 00:24:18.173 "params": { 00:24:18.173 "max_subsystems": 1024 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "nvmf_set_crdt", 00:24:18.173 "params": { 00:24:18.173 "crdt1": 0, 00:24:18.173 "crdt2": 0, 00:24:18.173 "crdt3": 0 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "nvmf_create_transport", 00:24:18.173 "params": 
{ 00:24:18.173 "trtype": "TCP", 00:24:18.173 "max_queue_depth": 128, 00:24:18.173 "max_io_qpairs_per_ctrlr": 127, 00:24:18.173 "in_capsule_data_size": 4096, 00:24:18.173 "max_io_size": 131072, 00:24:18.173 "io_unit_size": 131072, 00:24:18.173 "max_aq_depth": 128, 00:24:18.173 "num_shared_buffers": 511, 00:24:18.173 "buf_cache_size": 4294967295, 00:24:18.173 "dif_insert_or_strip": false, 00:24:18.173 "zcopy": false, 00:24:18.173 "c2h_success": false, 00:24:18.173 "sock_priority": 0, 00:24:18.173 "abort_timeout_sec": 1, 00:24:18.173 "ack_timeout": 0, 00:24:18.173 "data_wr_pool_size": 0 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "nvmf_create_subsystem", 00:24:18.173 "params": { 00:24:18.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.173 "allow_any_host": false, 00:24:18.173 "serial_number": "00000000000000000000", 00:24:18.173 "model_number": "SPDK bdev Controller", 00:24:18.173 "max_namespaces": 32, 00:24:18.173 "min_cntlid": 1, 00:24:18.173 "max_cntlid": 65519, 00:24:18.173 "ana_reporting": false 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "nvmf_subsystem_add_host", 00:24:18.173 "params": { 00:24:18.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.173 "host": "nqn.2016-06.io.spdk:host1", 00:24:18.173 "psk": "key0" 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "nvmf_subsystem_add_ns", 00:24:18.173 "params": { 00:24:18.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.173 "namespace": { 00:24:18.173 "nsid": 1, 00:24:18.173 "bdev_name": "malloc0", 00:24:18.173 "nguid": "1D7B20A9CA7F4DC1A0F90075CB981A82", 00:24:18.173 "uuid": "1d7b20a9-ca7f-4dc1-a0f9-0075cb981a82", 00:24:18.173 "no_auto_visible": false 00:24:18.173 } 00:24:18.173 } 00:24:18.173 }, 00:24:18.173 { 00:24:18.173 "method": "nvmf_subsystem_add_listener", 00:24:18.173 "params": { 00:24:18.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.173 "listen_address": { 00:24:18.173 "trtype": "TCP", 00:24:18.173 "adrfam": "IPv4", 00:24:18.173 "traddr": "10.0.0.2", 00:24:18.173 "trsvcid": "4420" 00:24:18.173 }, 00:24:18.173 "secure_channel": false, 00:24:18.173 "sock_impl": "ssl" 00:24:18.173 } 00:24:18.173 } 00:24:18.173 ] 00:24:18.173 } 00:24:18.173 ] 00:24:18.173 }' 00:24:18.173 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:18.431 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:18.431 "subsystems": [ 00:24:18.431 { 00:24:18.431 "subsystem": "keyring", 00:24:18.431 "config": [ 00:24:18.431 { 00:24:18.431 "method": "keyring_file_add_key", 00:24:18.431 "params": { 00:24:18.431 "name": "key0", 00:24:18.431 "path": "/tmp/tmp.EtYodpPzOo" 00:24:18.432 } 00:24:18.432 } 00:24:18.432 ] 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "subsystem": "iobuf", 00:24:18.432 "config": [ 00:24:18.432 { 00:24:18.432 "method": "iobuf_set_options", 00:24:18.432 "params": { 00:24:18.432 "small_pool_count": 8192, 00:24:18.432 "large_pool_count": 1024, 00:24:18.432 "small_bufsize": 8192, 00:24:18.432 "large_bufsize": 135168, 00:24:18.432 "enable_numa": false 00:24:18.432 } 00:24:18.432 } 00:24:18.432 ] 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "subsystem": "sock", 00:24:18.432 "config": [ 00:24:18.432 { 00:24:18.432 "method": "sock_set_default_impl", 00:24:18.432 "params": { 00:24:18.432 "impl_name": "posix" 00:24:18.432 } 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "method": "sock_impl_set_options", 00:24:18.432 
"params": { 00:24:18.432 "impl_name": "ssl", 00:24:18.432 "recv_buf_size": 4096, 00:24:18.432 "send_buf_size": 4096, 00:24:18.432 "enable_recv_pipe": true, 00:24:18.432 "enable_quickack": false, 00:24:18.432 "enable_placement_id": 0, 00:24:18.432 "enable_zerocopy_send_server": true, 00:24:18.432 "enable_zerocopy_send_client": false, 00:24:18.432 "zerocopy_threshold": 0, 00:24:18.432 "tls_version": 0, 00:24:18.432 "enable_ktls": false 00:24:18.432 } 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "method": "sock_impl_set_options", 00:24:18.432 "params": { 00:24:18.432 "impl_name": "posix", 00:24:18.432 "recv_buf_size": 2097152, 00:24:18.432 "send_buf_size": 2097152, 00:24:18.432 "enable_recv_pipe": true, 00:24:18.432 "enable_quickack": false, 00:24:18.432 "enable_placement_id": 0, 00:24:18.432 "enable_zerocopy_send_server": true, 00:24:18.432 "enable_zerocopy_send_client": false, 00:24:18.432 "zerocopy_threshold": 0, 00:24:18.432 "tls_version": 0, 00:24:18.432 "enable_ktls": false 00:24:18.432 } 00:24:18.432 } 00:24:18.432 ] 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "subsystem": "vmd", 00:24:18.432 "config": [] 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "subsystem": "accel", 00:24:18.432 "config": [ 00:24:18.432 { 00:24:18.432 "method": "accel_set_options", 00:24:18.432 "params": { 00:24:18.432 "small_cache_size": 128, 00:24:18.432 "large_cache_size": 16, 00:24:18.432 "task_count": 2048, 00:24:18.432 "sequence_count": 2048, 00:24:18.432 "buf_count": 2048 00:24:18.432 } 00:24:18.432 } 00:24:18.432 ] 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "subsystem": "bdev", 00:24:18.432 "config": [ 00:24:18.432 { 00:24:18.432 "method": "bdev_set_options", 00:24:18.432 "params": { 00:24:18.432 "bdev_io_pool_size": 65535, 00:24:18.432 "bdev_io_cache_size": 256, 00:24:18.432 "bdev_auto_examine": true, 00:24:18.432 "iobuf_small_cache_size": 128, 00:24:18.432 "iobuf_large_cache_size": 16 00:24:18.432 } 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "method": "bdev_raid_set_options", 00:24:18.432 "params": { 00:24:18.432 "process_window_size_kb": 1024, 00:24:18.432 "process_max_bandwidth_mb_sec": 0 00:24:18.432 } 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "method": "bdev_iscsi_set_options", 00:24:18.432 "params": { 00:24:18.432 "timeout_sec": 30 00:24:18.432 } 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "method": "bdev_nvme_set_options", 00:24:18.432 "params": { 00:24:18.432 "action_on_timeout": "none", 00:24:18.432 "timeout_us": 0, 00:24:18.432 "timeout_admin_us": 0, 00:24:18.432 "keep_alive_timeout_ms": 10000, 00:24:18.432 "arbitration_burst": 0, 00:24:18.432 "low_priority_weight": 0, 00:24:18.432 "medium_priority_weight": 0, 00:24:18.432 "high_priority_weight": 0, 00:24:18.432 "nvme_adminq_poll_period_us": 10000, 00:24:18.432 "nvme_ioq_poll_period_us": 0, 00:24:18.432 "io_queue_requests": 512, 00:24:18.432 "delay_cmd_submit": true, 00:24:18.432 "transport_retry_count": 4, 00:24:18.432 "bdev_retry_count": 3, 00:24:18.432 "transport_ack_timeout": 0, 00:24:18.432 "ctrlr_loss_timeout_sec": 0, 00:24:18.432 "reconnect_delay_sec": 0, 00:24:18.432 "fast_io_fail_timeout_sec": 0, 00:24:18.432 "disable_auto_failback": false, 00:24:18.432 "generate_uuids": false, 00:24:18.432 "transport_tos": 0, 00:24:18.432 "nvme_error_stat": false, 00:24:18.432 "rdma_srq_size": 0, 00:24:18.432 "io_path_stat": false, 00:24:18.432 "allow_accel_sequence": false, 00:24:18.432 "rdma_max_cq_size": 0, 00:24:18.432 "rdma_cm_event_timeout_ms": 0, 00:24:18.432 "dhchap_digests": [ 00:24:18.432 "sha256", 00:24:18.432 "sha384", 00:24:18.432 
"sha512" 00:24:18.432 ], 00:24:18.432 "dhchap_dhgroups": [ 00:24:18.432 "null", 00:24:18.432 "ffdhe2048", 00:24:18.432 "ffdhe3072", 00:24:18.432 "ffdhe4096", 00:24:18.432 "ffdhe6144", 00:24:18.432 "ffdhe8192" 00:24:18.432 ] 00:24:18.432 } 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "method": "bdev_nvme_attach_controller", 00:24:18.432 "params": { 00:24:18.432 "name": "nvme0", 00:24:18.432 "trtype": "TCP", 00:24:18.432 "adrfam": "IPv4", 00:24:18.432 "traddr": "10.0.0.2", 00:24:18.432 "trsvcid": "4420", 00:24:18.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.432 "prchk_reftag": false, 00:24:18.432 "prchk_guard": false, 00:24:18.432 "ctrlr_loss_timeout_sec": 0, 00:24:18.432 "reconnect_delay_sec": 0, 00:24:18.432 "fast_io_fail_timeout_sec": 0, 00:24:18.432 "psk": "key0", 00:24:18.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.432 "hdgst": false, 00:24:18.432 "ddgst": false, 00:24:18.432 "multipath": "multipath" 00:24:18.432 } 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "method": "bdev_nvme_set_hotplug", 00:24:18.432 "params": { 00:24:18.432 "period_us": 100000, 00:24:18.432 "enable": false 00:24:18.432 } 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "method": "bdev_enable_histogram", 00:24:18.432 "params": { 00:24:18.432 "name": "nvme0n1", 00:24:18.432 "enable": true 00:24:18.432 } 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "method": "bdev_wait_for_examine" 00:24:18.432 } 00:24:18.432 ] 00:24:18.432 }, 00:24:18.432 { 00:24:18.432 "subsystem": "nbd", 00:24:18.432 "config": [] 00:24:18.432 } 00:24:18.432 ] 00:24:18.432 }' 00:24:18.432 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 219345 00:24:18.432 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 219345 ']' 00:24:18.432 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 219345 00:24:18.432 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:18.432 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.432 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 219345 00:24:18.432 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:18.432 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:18.432 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 219345' 00:24:18.432 killing process with pid 219345 00:24:18.432 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 219345 00:24:18.432 Received shutdown signal, test time was about 1.000000 seconds 00:24:18.432 00:24:18.432 Latency(us) 00:24:18.432 [2024-11-19T22:48:52.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.433 [2024-11-19T22:48:52.745Z] =================================================================================================================== 00:24:18.433 [2024-11-19T22:48:52.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.433 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 219345 00:24:18.691 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 219206 00:24:18.691 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 219206 ']' 
00:24:18.691 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 219206 00:24:18.691 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:18.691 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.691 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 219206 00:24:18.691 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:18.691 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:18.691 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 219206' 00:24:18.691 killing process with pid 219206 00:24:18.691 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 219206 00:24:18.691 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 219206 00:24:18.951 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:18.951 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:18.951 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:18.951 "subsystems": [ 00:24:18.951 { 00:24:18.951 "subsystem": "keyring", 00:24:18.951 "config": [ 00:24:18.951 { 00:24:18.951 "method": "keyring_file_add_key", 00:24:18.951 "params": { 00:24:18.951 "name": "key0", 00:24:18.951 "path": "/tmp/tmp.EtYodpPzOo" 00:24:18.951 } 00:24:18.951 } 00:24:18.951 ] 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "subsystem": "iobuf", 00:24:18.951 "config": [ 00:24:18.951 { 00:24:18.951 "method": "iobuf_set_options", 00:24:18.951 "params": { 00:24:18.951 "small_pool_count": 8192, 00:24:18.951 "large_pool_count": 1024, 00:24:18.951 "small_bufsize": 8192, 00:24:18.951 "large_bufsize": 135168, 00:24:18.951 "enable_numa": false 00:24:18.951 } 00:24:18.951 } 00:24:18.951 ] 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "subsystem": "sock", 00:24:18.951 "config": [ 00:24:18.951 { 00:24:18.951 "method": "sock_set_default_impl", 00:24:18.951 "params": { 00:24:18.951 "impl_name": "posix" 00:24:18.951 } 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "method": "sock_impl_set_options", 00:24:18.951 "params": { 00:24:18.951 "impl_name": "ssl", 00:24:18.951 "recv_buf_size": 4096, 00:24:18.951 "send_buf_size": 4096, 00:24:18.951 "enable_recv_pipe": true, 00:24:18.951 "enable_quickack": false, 00:24:18.951 "enable_placement_id": 0, 00:24:18.951 "enable_zerocopy_send_server": true, 00:24:18.951 "enable_zerocopy_send_client": false, 00:24:18.951 "zerocopy_threshold": 0, 00:24:18.951 "tls_version": 0, 00:24:18.951 "enable_ktls": false 00:24:18.951 } 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "method": "sock_impl_set_options", 00:24:18.951 "params": { 00:24:18.951 "impl_name": "posix", 00:24:18.951 "recv_buf_size": 2097152, 00:24:18.951 "send_buf_size": 2097152, 00:24:18.951 "enable_recv_pipe": true, 00:24:18.951 "enable_quickack": false, 00:24:18.951 "enable_placement_id": 0, 00:24:18.951 "enable_zerocopy_send_server": true, 00:24:18.951 "enable_zerocopy_send_client": false, 00:24:18.951 "zerocopy_threshold": 0, 00:24:18.951 "tls_version": 0, 00:24:18.951 "enable_ktls": false 00:24:18.951 } 00:24:18.951 } 00:24:18.951 ] 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "subsystem": "vmd", 
00:24:18.951 "config": [] 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "subsystem": "accel", 00:24:18.951 "config": [ 00:24:18.951 { 00:24:18.951 "method": "accel_set_options", 00:24:18.951 "params": { 00:24:18.951 "small_cache_size": 128, 00:24:18.951 "large_cache_size": 16, 00:24:18.951 "task_count": 2048, 00:24:18.951 "sequence_count": 2048, 00:24:18.951 "buf_count": 2048 00:24:18.951 } 00:24:18.951 } 00:24:18.951 ] 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "subsystem": "bdev", 00:24:18.951 "config": [ 00:24:18.951 { 00:24:18.951 "method": "bdev_set_options", 00:24:18.951 "params": { 00:24:18.951 "bdev_io_pool_size": 65535, 00:24:18.951 "bdev_io_cache_size": 256, 00:24:18.951 "bdev_auto_examine": true, 00:24:18.951 "iobuf_small_cache_size": 128, 00:24:18.951 "iobuf_large_cache_size": 16 00:24:18.951 } 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "method": "bdev_raid_set_options", 00:24:18.951 "params": { 00:24:18.951 "process_window_size_kb": 1024, 00:24:18.951 "process_max_bandwidth_mb_sec": 0 00:24:18.951 } 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "method": "bdev_iscsi_set_options", 00:24:18.951 "params": { 00:24:18.951 "timeout_sec": 30 00:24:18.951 } 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "method": "bdev_nvme_set_options", 00:24:18.951 "params": { 00:24:18.951 "action_on_timeout": "none", 00:24:18.951 "timeout_us": 0, 00:24:18.951 "timeout_admin_us": 0, 00:24:18.951 "keep_alive_timeout_ms": 10000, 00:24:18.951 "arbitration_burst": 0, 00:24:18.951 "low_priority_weight": 0, 00:24:18.951 "medium_priority_weight": 0, 00:24:18.951 "high_priority_weight": 0, 00:24:18.951 "nvme_adminq_poll_period_us": 10000, 00:24:18.951 "nvme_ioq_poll_period_us": 0, 00:24:18.951 "io_queue_requests": 0, 00:24:18.951 "delay_cmd_submit": true, 00:24:18.951 "transport_retry_count": 4, 00:24:18.951 "bdev_retry_count": 3, 00:24:18.951 "transport_ack_timeout": 0, 00:24:18.951 "ctrlr_loss_timeout_sec": 0, 00:24:18.951 "reconnect_delay_sec": 0, 00:24:18.951 "fast_io_fail_timeout_sec": 0, 00:24:18.951 "disable_auto_failback": false, 00:24:18.951 "generate_uuids": false, 00:24:18.951 "transport_tos": 0, 00:24:18.951 "nvme_error_stat": false, 00:24:18.951 "rdma_srq_size": 0, 00:24:18.951 "io_path_stat": false, 00:24:18.951 "allow_accel_sequence": false, 00:24:18.951 "rdma_max_cq_size": 0, 00:24:18.951 "rdma_cm_event_timeout_ms": 0, 00:24:18.951 "dhchap_digests": [ 00:24:18.951 "sha256", 00:24:18.951 "sha384", 00:24:18.951 "sha512" 00:24:18.951 ], 00:24:18.951 "dhchap_dhgroups": [ 00:24:18.951 "null", 00:24:18.951 "ffdhe2048", 00:24:18.951 "ffdhe3072", 00:24:18.951 "ffdhe4096", 00:24:18.951 "ffdhe6144", 00:24:18.951 "ffdhe8192" 00:24:18.951 ] 00:24:18.951 } 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "method": "bdev_nvme_set_hotplug", 00:24:18.951 "params": { 00:24:18.951 "period_us": 100000, 00:24:18.951 "enable": false 00:24:18.951 } 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "method": "bdev_malloc_create", 00:24:18.951 "params": { 00:24:18.951 "name": "malloc0", 00:24:18.951 "num_blocks": 8192, 00:24:18.951 "block_size": 4096, 00:24:18.951 "physical_block_size": 4096, 00:24:18.951 "uuid": "1d7b20a9-ca7f-4dc1-a0f9-0075cb981a82", 00:24:18.951 "optimal_io_boundary": 0, 00:24:18.951 "md_size": 0, 00:24:18.951 "dif_type": 0, 00:24:18.951 "dif_is_head_of_md": false, 00:24:18.951 "dif_pi_format": 0 00:24:18.951 } 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "method": "bdev_wait_for_examine" 00:24:18.951 } 00:24:18.951 ] 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "subsystem": "nbd", 00:24:18.951 "config": [] 
00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "subsystem": "scheduler", 00:24:18.951 "config": [ 00:24:18.951 { 00:24:18.951 "method": "framework_set_scheduler", 00:24:18.951 "params": { 00:24:18.951 "name": "static" 00:24:18.951 } 00:24:18.951 } 00:24:18.951 ] 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "subsystem": "nvmf", 00:24:18.951 "config": [ 00:24:18.951 { 00:24:18.951 "method": "nvmf_set_config", 00:24:18.951 "params": { 00:24:18.951 "discovery_filter": "match_any", 00:24:18.951 "admin_cmd_passthru": { 00:24:18.951 "identify_ctrlr": false 00:24:18.951 }, 00:24:18.951 "dhchap_digests": [ 00:24:18.951 "sha256", 00:24:18.951 "sha384", 00:24:18.951 "sha512" 00:24:18.951 ], 00:24:18.951 "dhchap_dhgroups": [ 00:24:18.951 "null", 00:24:18.951 "ffdhe2048", 00:24:18.951 "ffdhe3072", 00:24:18.951 "ffdhe4096", 00:24:18.951 "ffdhe6144", 00:24:18.951 "ffdhe8192" 00:24:18.951 ] 00:24:18.951 } 00:24:18.951 }, 00:24:18.951 { 00:24:18.951 "method": "nvmf_set_max_subsystems", 00:24:18.951 "params": { 00:24:18.951 "max_subsystems": 1024 00:24:18.951 } 00:24:18.951 }, 00:24:18.951 { 00:24:18.952 "method": "nvmf_set_crdt", 00:24:18.952 "params": { 00:24:18.952 "crdt1": 0, 00:24:18.952 "crdt2": 0, 00:24:18.952 "crdt3": 0 00:24:18.952 } 00:24:18.952 }, 00:24:18.952 { 00:24:18.952 "method": "nvmf_create_transport", 00:24:18.952 "params": { 00:24:18.952 "trtype": "TCP", 00:24:18.952 "max_queue_depth": 128, 00:24:18.952 "max_io_qpairs_per_ctrlr": 127, 00:24:18.952 "in_capsule_data_size": 4096, 00:24:18.952 "max_io_size": 131072, 00:24:18.952 "io_unit_size": 131072, 00:24:18.952 "max_aq_depth": 128, 00:24:18.952 "num_shared_buffers": 511, 00:24:18.952 "buf_cache_size": 4294967295, 00:24:18.952 "dif_insert_or_strip": false, 00:24:18.952 "zcopy": false, 00:24:18.952 "c2h_success": false, 00:24:18.952 "sock_priority": 0, 00:24:18.952 "abort_timeout_sec": 1, 00:24:18.952 "ack_timeout": 0, 00:24:18.952 "data_wr_pool_size": 0 00:24:18.952 } 00:24:18.952 }, 00:24:18.952 { 00:24:18.952 "method": "nvmf_create_subsystem", 00:24:18.952 "params": { 00:24:18.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.952 "allow_any_host": false, 00:24:18.952 "serial_number": "00000000000000000000", 00:24:18.952 "model_number": "SPDK bdev Controller", 00:24:18.952 "max_namespaces": 32, 00:24:18.952 "min_cntlid": 1, 00:24:18.952 "max_cntlid": 65519, 00:24:18.952 "ana_reporting": false 00:24:18.952 } 00:24:18.952 }, 00:24:18.952 { 00:24:18.952 "method": "nvmf_subsystem_add_host", 00:24:18.952 "params": { 00:24:18.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.952 "host": "nqn.2016-06.io.spdk:host1", 00:24:18.952 "psk": "key0" 00:24:18.952 } 00:24:18.952 }, 00:24:18.952 { 00:24:18.952 "method": "nvmf_subsystem_add_ns", 00:24:18.952 "params": { 00:24:18.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.952 "namespace": { 00:24:18.952 "nsid": 1, 00:24:18.952 "bdev_name": "malloc0", 00:24:18.952 "nguid": "1D7B20A9CA7F4DC1A0F90075CB981A82", 00:24:18.952 "uuid": "1d7b20a9-ca7f-4dc1-a0f9-0075cb981a82", 00:24:18.952 "no_auto_visible": false 00:24:18.952 } 00:24:18.952 } 00:24:18.952 }, 00:24:18.952 { 00:24:18.952 "method": "nvmf_subsystem_add_listener", 00:24:18.952 "params": { 00:24:18.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.952 "listen_address": { 00:24:18.952 "trtype": "TCP", 00:24:18.952 "adrfam": "IPv4", 00:24:18.952 "traddr": "10.0.0.2", 00:24:18.952 "trsvcid": "4420" 00:24:18.952 }, 00:24:18.952 "secure_channel": false, 00:24:18.952 "sock_impl": "ssl" 00:24:18.952 } 00:24:18.952 } 00:24:18.952 ] 00:24:18.952 } 00:24:18.952 
] 00:24:18.952 }' 00:24:18.952 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.952 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.952 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=219636 00:24:18.952 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:18.952 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 219636 00:24:18.952 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 219636 ']' 00:24:18.952 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.952 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.952 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.952 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.952 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.952 [2024-11-19 23:48:53.248966] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:24:18.952 [2024-11-19 23:48:53.249056] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.211 [2024-11-19 23:48:53.326976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.211 [2024-11-19 23:48:53.373099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.211 [2024-11-19 23:48:53.373170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.211 [2024-11-19 23:48:53.373186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.211 [2024-11-19 23:48:53.373200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.211 [2024-11-19 23:48:53.373212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
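This part of the test replays saved configuration rather than re-issuing the RPCs one by one: save_config is captured from both the target and bdevperf (the tgtcfg and bperfcfg JSON shown above), and each application is restarted with that JSON handed in on a file descriptor (-c /dev/fd/62 for nvmf_tgt here, -c /dev/fd/63 for bdevperf a little further down). A hedged sketch of the same capture-and-replay pattern, assuming the running target still answers on /var/tmp/spdk.sock:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Capture the live configuration as JSON...
    tgtcfg=$("$SPDK/scripts/rpc.py" save_config)
    # ...and feed it back at startup; bash process substitution appears to the app as /dev/fd/NN.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &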
00:24:19.211 [2024-11-19 23:48:53.373918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.469 [2024-11-19 23:48:53.617376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.469 [2024-11-19 23:48:53.649391] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.469 [2024-11-19 23:48:53.649659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=219790 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 219790 /var/tmp/bdevperf.sock 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 219790 ']' 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.035 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:20.035 "subsystems": [ 00:24:20.035 { 00:24:20.035 "subsystem": "keyring", 00:24:20.035 "config": [ 00:24:20.035 { 00:24:20.035 "method": "keyring_file_add_key", 00:24:20.035 "params": { 00:24:20.035 "name": "key0", 00:24:20.035 "path": "/tmp/tmp.EtYodpPzOo" 00:24:20.035 } 00:24:20.035 } 00:24:20.035 ] 00:24:20.035 }, 00:24:20.035 { 00:24:20.035 "subsystem": "iobuf", 00:24:20.035 "config": [ 00:24:20.035 { 00:24:20.035 "method": "iobuf_set_options", 00:24:20.035 "params": { 00:24:20.035 "small_pool_count": 8192, 00:24:20.035 "large_pool_count": 1024, 00:24:20.035 "small_bufsize": 8192, 00:24:20.035 "large_bufsize": 135168, 00:24:20.035 "enable_numa": false 00:24:20.035 } 00:24:20.035 } 00:24:20.035 ] 00:24:20.035 }, 00:24:20.035 { 00:24:20.035 "subsystem": "sock", 00:24:20.035 "config": [ 00:24:20.035 { 00:24:20.035 "method": "sock_set_default_impl", 00:24:20.035 "params": { 00:24:20.035 "impl_name": "posix" 00:24:20.035 } 00:24:20.035 }, 00:24:20.035 { 00:24:20.035 "method": "sock_impl_set_options", 00:24:20.035 "params": { 00:24:20.035 "impl_name": "ssl", 00:24:20.035 "recv_buf_size": 4096, 00:24:20.035 "send_buf_size": 4096, 00:24:20.035 "enable_recv_pipe": true, 00:24:20.035 "enable_quickack": false, 00:24:20.035 "enable_placement_id": 0, 00:24:20.035 "enable_zerocopy_send_server": true, 00:24:20.035 "enable_zerocopy_send_client": false, 00:24:20.035 "zerocopy_threshold": 0, 00:24:20.035 "tls_version": 0, 00:24:20.035 
"enable_ktls": false 00:24:20.035 } 00:24:20.035 }, 00:24:20.035 { 00:24:20.035 "method": "sock_impl_set_options", 00:24:20.035 "params": { 00:24:20.035 "impl_name": "posix", 00:24:20.035 "recv_buf_size": 2097152, 00:24:20.035 "send_buf_size": 2097152, 00:24:20.035 "enable_recv_pipe": true, 00:24:20.035 "enable_quickack": false, 00:24:20.035 "enable_placement_id": 0, 00:24:20.035 "enable_zerocopy_send_server": true, 00:24:20.035 "enable_zerocopy_send_client": false, 00:24:20.035 "zerocopy_threshold": 0, 00:24:20.035 "tls_version": 0, 00:24:20.035 "enable_ktls": false 00:24:20.035 } 00:24:20.035 } 00:24:20.035 ] 00:24:20.035 }, 00:24:20.035 { 00:24:20.035 "subsystem": "vmd", 00:24:20.035 "config": [] 00:24:20.035 }, 00:24:20.035 { 00:24:20.035 "subsystem": "accel", 00:24:20.035 "config": [ 00:24:20.035 { 00:24:20.035 "method": "accel_set_options", 00:24:20.035 "params": { 00:24:20.035 "small_cache_size": 128, 00:24:20.035 "large_cache_size": 16, 00:24:20.035 "task_count": 2048, 00:24:20.035 "sequence_count": 2048, 00:24:20.035 "buf_count": 2048 00:24:20.035 } 00:24:20.035 } 00:24:20.035 ] 00:24:20.035 }, 00:24:20.035 { 00:24:20.035 "subsystem": "bdev", 00:24:20.035 "config": [ 00:24:20.035 { 00:24:20.035 "method": "bdev_set_options", 00:24:20.035 "params": { 00:24:20.035 "bdev_io_pool_size": 65535, 00:24:20.036 "bdev_io_cache_size": 256, 00:24:20.036 "bdev_auto_examine": true, 00:24:20.036 "iobuf_small_cache_size": 128, 00:24:20.036 "iobuf_large_cache_size": 16 00:24:20.036 } 00:24:20.036 }, 00:24:20.036 { 00:24:20.036 "method": "bdev_raid_set_options", 00:24:20.036 "params": { 00:24:20.036 "process_window_size_kb": 1024, 00:24:20.036 "process_max_bandwidth_mb_sec": 0 00:24:20.036 } 00:24:20.036 }, 00:24:20.036 { 00:24:20.036 "method": "bdev_iscsi_set_options", 00:24:20.036 "params": { 00:24:20.036 "timeout_sec": 30 00:24:20.036 } 00:24:20.036 }, 00:24:20.036 { 00:24:20.036 "method": "bdev_nvme_set_options", 00:24:20.036 "params": { 00:24:20.036 "action_on_timeout": "none", 00:24:20.036 "timeout_us": 0, 00:24:20.036 "timeout_admin_us": 0, 00:24:20.036 "keep_alive_timeout_ms": 10000, 00:24:20.036 "arbitration_burst": 0, 00:24:20.036 "low_priority_weight": 0, 00:24:20.036 "medium_priority_weight": 0, 00:24:20.036 "high_priority_weight": 0, 00:24:20.036 "nvme_adminq_poll_period_us": 10000, 00:24:20.036 "nvme_ioq_poll_period_us": 0, 00:24:20.036 "io_queue_requests": 512, 00:24:20.036 "delay_cmd_submit": true, 00:24:20.036 "transport_retry_count": 4, 00:24:20.036 "bdev_retry_count": 3, 00:24:20.036 "transport_ack_timeout": 0, 00:24:20.036 "ctrlr_loss_timeout_sec": 0, 00:24:20.036 "reconnect_delay_sec": 0, 00:24:20.036 "fast_io_fail_timeout_sec": 0, 00:24:20.036 "disable_auto_failback": false, 00:24:20.036 "generate_uuids": false, 00:24:20.036 "transport_tos": 0, 00:24:20.036 "nvme_error_stat": false, 00:24:20.036 "rdma_srq_size": 0, 00:24:20.036 "io_path_stat": false, 00:24:20.036 "allow_accel_sequence": false, 00:24:20.036 "rdma_max_cq_size": 0, 00:24:20.036 "rdma_cm_event_timeout_ms": 0, 00:24:20.036 "dhchap_digests": [ 00:24:20.036 "sha256", 00:24:20.036 "sha384", 00:24:20.036 "sha512" 00:24:20.036 ], 00:24:20.036 "dhchap_dhgroups": [ 00:24:20.036 "null", 00:24:20.036 "ffdhe2048", 00:24:20.036 "ffdhe3072", 00:24:20.036 "ffdhe4096", 00:24:20.036 "ffdhe6144", 00:24:20.036 "ffdhe8192" 00:24:20.036 ] 00:24:20.036 } 00:24:20.036 }, 00:24:20.036 { 00:24:20.036 "method": "bdev_nvme_attach_controller", 00:24:20.036 "params": { 00:24:20.036 "name": "nvme0", 00:24:20.036 "trtype": "TCP", 00:24:20.036 
"adrfam": "IPv4", 00:24:20.036 "traddr": "10.0.0.2", 00:24:20.036 "trsvcid": "4420", 00:24:20.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.036 "prchk_reftag": false, 00:24:20.036 "prchk_guard": false, 00:24:20.036 "ctrlr_loss_timeout_sec": 0, 00:24:20.036 "reconnect_delay_sec": 0, 00:24:20.036 "fast_io_fail_timeout_sec": 0, 00:24:20.036 "psk": "key0", 00:24:20.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:20.036 "hdgst": false, 00:24:20.036 "ddgst": false, 00:24:20.036 "multipath": "multipath" 00:24:20.036 } 00:24:20.036 }, 00:24:20.036 { 00:24:20.036 "method": "bdev_nvme_set_hotplug", 00:24:20.036 "params": { 00:24:20.036 "period_us": 100000, 00:24:20.036 "enable": false 00:24:20.036 } 00:24:20.036 }, 00:24:20.036 { 00:24:20.036 "method": "bdev_enable_histogram", 00:24:20.036 "params": { 00:24:20.036 "name": "nvme0n1", 00:24:20.036 "enable": true 00:24:20.036 } 00:24:20.036 }, 00:24:20.036 { 00:24:20.036 "method": "bdev_wait_for_examine" 00:24:20.036 } 00:24:20.036 ] 00:24:20.036 }, 00:24:20.036 { 00:24:20.036 "subsystem": "nbd", 00:24:20.036 "config": [] 00:24:20.036 } 00:24:20.036 ] 00:24:20.036 }' 00:24:20.036 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.036 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.036 23:48:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.036 [2024-11-19 23:48:54.312004] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:24:20.036 [2024-11-19 23:48:54.312097] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid219790 ] 00:24:20.294 [2024-11-19 23:48:54.384478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.294 [2024-11-19 23:48:54.435304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.552 [2024-11-19 23:48:54.618314] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.118 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.118 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:21.118 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:21.118 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:21.376 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.376 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:21.376 Running I/O for 1 seconds... 
00:24:22.751 3121.00 IOPS, 12.19 MiB/s 00:24:22.751 Latency(us) 00:24:22.751 [2024-11-19T22:48:57.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.751 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:22.751 Verification LBA range: start 0x0 length 0x2000 00:24:22.751 nvme0n1 : 1.02 3175.88 12.41 0.00 0.00 39909.45 8204.14 35535.08 00:24:22.751 [2024-11-19T22:48:57.063Z] =================================================================================================================== 00:24:22.751 [2024-11-19T22:48:57.063Z] Total : 3175.88 12.41 0.00 0.00 39909.45 8204.14 35535.08 00:24:22.751 { 00:24:22.751 "results": [ 00:24:22.751 { 00:24:22.751 "job": "nvme0n1", 00:24:22.751 "core_mask": "0x2", 00:24:22.751 "workload": "verify", 00:24:22.751 "status": "finished", 00:24:22.751 "verify_range": { 00:24:22.751 "start": 0, 00:24:22.751 "length": 8192 00:24:22.751 }, 00:24:22.751 "queue_depth": 128, 00:24:22.751 "io_size": 4096, 00:24:22.751 "runtime": 1.023022, 00:24:22.751 "iops": 3175.884780581454, 00:24:22.751 "mibps": 12.405799924146304, 00:24:22.751 "io_failed": 0, 00:24:22.751 "io_timeout": 0, 00:24:22.751 "avg_latency_us": 39909.44648222245, 00:24:22.751 "min_latency_us": 8204.136296296296, 00:24:22.751 "max_latency_us": 35535.07555555556 00:24:22.751 } 00:24:22.751 ], 00:24:22.751 "core_count": 1 00:24:22.751 } 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:22.751 nvmf_trace.0 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 219790 00:24:22.751 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 219790 ']' 00:24:22.752 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 219790 00:24:22.752 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:22.752 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.752 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 219790 
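Before the timed run, the script double-checks that the TLS controller actually came up (`bdev_nvme_get_controllers` must report nvme0) and only then triggers the workload through bdevperf's own RPC helper. Condensed below, assuming the same socket path and an SPDK checkout under ./spdk:

```bash
SPDK=./spdk                               # assumption: path to the SPDK source tree
SOCK=/var/tmp/bdevperf.sock

# The TLS handshake happens at attach time, so a missing controller here means
# the PSK/keyring setup failed rather than the I/O phase.
name=$("$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]] || { echo "controller did not attach"; exit 1; }

# Tell the idle (-z) bdevperf instance to execute its configured verify job.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
```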
00:24:22.752 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:22.752 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:22.752 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 219790' 00:24:22.752 killing process with pid 219790 00:24:22.752 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 219790 00:24:22.752 Received shutdown signal, test time was about 1.000000 seconds 00:24:22.752 00:24:22.752 Latency(us) 00:24:22.752 [2024-11-19T22:48:57.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.752 [2024-11-19T22:48:57.064Z] =================================================================================================================== 00:24:22.752 [2024-11-19T22:48:57.064Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.752 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 219790 00:24:22.752 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:22.752 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:22.752 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:22.752 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:22.752 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:22.752 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:22.752 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:22.752 rmmod nvme_tcp 00:24:23.010 rmmod nvme_fabrics 00:24:23.010 rmmod nvme_keyring 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 219636 ']' 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 219636 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 219636 ']' 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 219636 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 219636 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 219636' 00:24:23.010 killing process with pid 219636 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 219636 00:24:23.010 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 219636 00:24:23.269 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:23.269 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:23.269 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:23.269 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:23.269 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:23.269 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:23.269 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:23.269 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.269 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.269 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.269 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.269 23:48:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.170 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:25.170 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.6GhmIg1C8C /tmp/tmp.qhXyBkWne4 /tmp/tmp.EtYodpPzOo 00:24:25.170 00:24:25.170 real 1m22.771s 00:24:25.170 user 2m11.334s 00:24:25.170 sys 0m27.853s 00:24:25.170 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.170 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.170 ************************************ 00:24:25.170 END TEST nvmf_tls 00:24:25.170 ************************************ 00:24:25.170 23:48:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:25.170 23:48:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:25.170 23:48:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:25.170 23:48:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:25.170 ************************************ 00:24:25.170 START TEST nvmf_fips 00:24:25.170 ************************************ 00:24:25.170 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:25.429 * Looking for test storage... 
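The block just above is the standard teardown each of these suites runs: archive the shared-memory trace, kill the bdevperf and nvmf_tgt processes by PID, unload the NVMe kernel modules, and undo the iptables and netns plumbing before the next suite starts. A rough sketch of that sequence follows; OUTPUT_DIR, the saved PIDs, and the PSK file variables are placeholders for values the suite tracks itself, while the interface and namespace names match this particular rig.

```bash
# Approximate teardown (nvmftestfini plus cleanup), not the literal script.
OUTPUT_DIR=./output                       # assumption: autotest output directory

# Keep the trace buffer for offline debugging before anything is torn down.
tar -C /dev/shm -czf "$OUTPUT_DIR/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

kill "$BDEVPERF_PID" "$NVMF_PID" 2>/dev/null || true

# Unload the initiator-side kernel modules pulled in during setup; -r also
# drops dependent modules such as nvme_keyring.
modprobe -v -r nvme-tcp nvme-fabrics

# Drop only the SPDK-tagged firewall rules, then remove the target namespace
# and the leftover address on the initiator interface.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1

# Finally remove the temporary PSK files created for the TLS cases.
rm -f "$PSK_FILE_1" "$PSK_FILE_2" "$PSK_FILE_3"
```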
00:24:25.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:25.429 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:25.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.430 --rc genhtml_branch_coverage=1 00:24:25.430 --rc genhtml_function_coverage=1 00:24:25.430 --rc genhtml_legend=1 00:24:25.430 --rc geninfo_all_blocks=1 00:24:25.430 --rc geninfo_unexecuted_blocks=1 00:24:25.430 00:24:25.430 ' 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:25.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.430 --rc genhtml_branch_coverage=1 00:24:25.430 --rc genhtml_function_coverage=1 00:24:25.430 --rc genhtml_legend=1 00:24:25.430 --rc geninfo_all_blocks=1 00:24:25.430 --rc geninfo_unexecuted_blocks=1 00:24:25.430 00:24:25.430 ' 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:25.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.430 --rc genhtml_branch_coverage=1 00:24:25.430 --rc genhtml_function_coverage=1 00:24:25.430 --rc genhtml_legend=1 00:24:25.430 --rc geninfo_all_blocks=1 00:24:25.430 --rc geninfo_unexecuted_blocks=1 00:24:25.430 00:24:25.430 ' 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:25.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.430 --rc genhtml_branch_coverage=1 00:24:25.430 --rc genhtml_function_coverage=1 00:24:25.430 --rc genhtml_legend=1 00:24:25.430 --rc geninfo_all_blocks=1 00:24:25.430 --rc geninfo_unexecuted_blocks=1 00:24:25.430 00:24:25.430 ' 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:25.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:25.430 23:48:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:25.430 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:25.431 Error setting digest 00:24:25.431 40320F88B07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:25.431 40320F88B07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:25.431 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:25.432 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:25.432 
23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.432 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:25.432 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:25.432 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:25.432 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.432 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.432 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.432 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:25.432 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:25.432 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:25.432 23:48:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.965 23:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:27.965 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.965 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:27.966 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.966 23:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:27.966 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:27.966 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.966 23:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:24:27.966 00:24:27.966 --- 10.0.0.2 ping statistics --- 00:24:27.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.966 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:24:27.966 00:24:27.966 --- 10.0.0.1 ping statistics --- 00:24:27.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.966 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=222152 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 222152 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 222152 ']' 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.966 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:27.966 [2024-11-19 23:49:02.064603] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:24:27.966 [2024-11-19 23:49:02.064676] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.966 [2024-11-19 23:49:02.139638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.966 [2024-11-19 23:49:02.186847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.966 [2024-11-19 23:49:02.186916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.966 [2024-11-19 23:49:02.186933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.966 [2024-11-19 23:49:02.186947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.966 [2024-11-19 23:49:02.186960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.966 [2024-11-19 23:49:02.187611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.7gq 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.7gq 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.7gq 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.7gq 00:24:28.225 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:28.483 [2024-11-19 23:49:02.649935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.483 [2024-11-19 23:49:02.665950] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:28.483 [2024-11-19 23:49:02.666243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.483 malloc0 00:24:28.483 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:28.483 23:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=222231 00:24:28.483 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 222231 /var/tmp/bdevperf.sock 00:24:28.483 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:28.483 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 222231 ']' 00:24:28.483 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.483 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.483 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.483 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.483 23:49:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:28.742 [2024-11-19 23:49:02.801286] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:24:28.742 [2024-11-19 23:49:02.801384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222231 ] 00:24:28.742 [2024-11-19 23:49:02.869142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.742 [2024-11-19 23:49:02.915971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.000 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.000 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:29.000 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.7gq 00:24:29.258 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:29.516 [2024-11-19 23:49:03.635134] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:29.516 TLSTESTn1 00:24:29.516 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:29.774 Running I/O for 10 seconds... 
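Unlike the earlier run, the fips case does the TLS attach against an already-running bdevperf over live RPC rather than baking it into the start-up config: write the interchange-format key to a 0600 file, register it in the keyring, attach with --psk, then start the workload. Condensed below; the key string is a placeholder (the suite generates its own) and ./spdk again stands in for the checkout path.

```bash
SPDK=./spdk
SOCK=/var/tmp/bdevperf.sock

# Interchange-format TLS PSK; the value below is a placeholder.
KEY_PATH=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:<base64-psk-material>:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"

# Register the key, then attach the NVMe/TCP controller over TLS with it.
"$SPDK"/scripts/rpc.py -s "$SOCK" keyring_file_add_key key0 "$KEY_PATH"
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0

# Run the 10-second verify workload on the new TLSTESTn1 bdev.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
```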
00:24:31.641 3422.00 IOPS, 13.37 MiB/s [2024-11-19T22:49:06.886Z] 3506.50 IOPS, 13.70 MiB/s [2024-11-19T22:49:08.259Z] 3525.67 IOPS, 13.77 MiB/s [2024-11-19T22:49:09.193Z] 3525.50 IOPS, 13.77 MiB/s [2024-11-19T22:49:10.123Z] 3532.80 IOPS, 13.80 MiB/s [2024-11-19T22:49:11.057Z] 3502.33 IOPS, 13.68 MiB/s [2024-11-19T22:49:11.990Z] 3485.71 IOPS, 13.62 MiB/s [2024-11-19T22:49:12.925Z] 3490.12 IOPS, 13.63 MiB/s [2024-11-19T22:49:14.299Z] 3490.44 IOPS, 13.63 MiB/s [2024-11-19T22:49:14.299Z] 3493.30 IOPS, 13.65 MiB/s 00:24:39.987 Latency(us) 00:24:39.987 [2024-11-19T22:49:14.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.987 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:39.987 Verification LBA range: start 0x0 length 0x2000 00:24:39.987 TLSTESTn1 : 10.02 3499.23 13.67 0.00 0.00 36519.53 6844.87 27767.85 00:24:39.987 [2024-11-19T22:49:14.299Z] =================================================================================================================== 00:24:39.987 [2024-11-19T22:49:14.299Z] Total : 3499.23 13.67 0.00 0.00 36519.53 6844.87 27767.85 00:24:39.987 { 00:24:39.987 "results": [ 00:24:39.987 { 00:24:39.987 "job": "TLSTESTn1", 00:24:39.987 "core_mask": "0x4", 00:24:39.987 "workload": "verify", 00:24:39.987 "status": "finished", 00:24:39.987 "verify_range": { 00:24:39.987 "start": 0, 00:24:39.987 "length": 8192 00:24:39.987 }, 00:24:39.987 "queue_depth": 128, 00:24:39.987 "io_size": 4096, 00:24:39.987 "runtime": 10.019355, 00:24:39.987 "iops": 3499.2272456660135, 00:24:39.987 "mibps": 13.668856428382865, 00:24:39.987 "io_failed": 0, 00:24:39.987 "io_timeout": 0, 00:24:39.987 "avg_latency_us": 36519.527187509244, 00:24:39.987 "min_latency_us": 6844.8711111111115, 00:24:39.987 "max_latency_us": 27767.845925925925 00:24:39.987 } 00:24:39.987 ], 00:24:39.987 "core_count": 1 00:24:39.987 } 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:39.987 nvmf_trace.0 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 222231 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 222231 ']' 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 222231 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.987 23:49:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 222231 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 222231' 00:24:39.987 killing process with pid 222231 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 222231 00:24:39.987 Received shutdown signal, test time was about 10.000000 seconds 00:24:39.987 00:24:39.987 Latency(us) 00:24:39.987 [2024-11-19T22:49:14.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.987 [2024-11-19T22:49:14.299Z] =================================================================================================================== 00:24:39.987 [2024-11-19T22:49:14.299Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 222231 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:39.987 rmmod nvme_tcp 00:24:39.987 rmmod nvme_fabrics 00:24:39.987 rmmod nvme_keyring 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 222152 ']' 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 222152 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 222152 ']' 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 222152 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.987 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 222152 00:24:40.245 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:40.245 23:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:40.245 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 222152' 00:24:40.245 killing process with pid 222152 00:24:40.245 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 222152 00:24:40.245 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 222152 00:24:40.246 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:40.246 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:40.246 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:40.246 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:40.246 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:40.246 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:40.246 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:40.246 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:40.246 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:40.246 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.246 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.246 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.7gq 00:24:42.781 00:24:42.781 real 0m17.130s 00:24:42.781 user 0m22.744s 00:24:42.781 sys 0m5.471s 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.781 ************************************ 00:24:42.781 END TEST nvmf_fips 00:24:42.781 ************************************ 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:42.781 ************************************ 00:24:42.781 START TEST nvmf_control_msg_list 00:24:42.781 ************************************ 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:42.781 * Looking for test storage... 
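The control_msg_list run that starts here is traced in full further below; stripped of the wrapper functions, its target-side setup reduces to a handful of RPCs. A minimal sketch of that sequence, assuming the nvmf_tgt application is already up and reachable on its default RPC socket (the job itself issues these through rpc_cmd against a target running inside the cvl_0_0_ns_spdk namespace):

#!/usr/bin/env bash
# Target-side RPC sequence mirrored from the control_msg_list trace that follows.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
SUBNQN=nqn.2024-07.io.spdk:cnode0

# TCP transport restricted to a single control-message buffer -- the condition under test.
"$RPC" nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

# One subsystem with a 32 MB / 512 B-block malloc namespace, listening on 10.0.0.2:4420.
"$RPC" nvmf_create_subsystem "$SUBNQN" -a
"$RPC" bdev_malloc_create -b Malloc0 32 512
"$RPC" nvmf_subsystem_add_ns "$SUBNQN" Malloc0
"$RPC" nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420

Three spdk_nvme_perf instances (-c 0x2/0x4/0x8, -q 1, 4 KiB random reads for 1 s each) are then run in parallel against that listener, presumably to verify that the target copes when only one control-message buffer is available.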
00:24:42.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:42.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.781 --rc genhtml_branch_coverage=1 00:24:42.781 --rc genhtml_function_coverage=1 00:24:42.781 --rc genhtml_legend=1 00:24:42.781 --rc geninfo_all_blocks=1 00:24:42.781 --rc geninfo_unexecuted_blocks=1 00:24:42.781 00:24:42.781 ' 00:24:42.781 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:42.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.782 --rc genhtml_branch_coverage=1 00:24:42.782 --rc genhtml_function_coverage=1 00:24:42.782 --rc genhtml_legend=1 00:24:42.782 --rc geninfo_all_blocks=1 00:24:42.782 --rc geninfo_unexecuted_blocks=1 00:24:42.782 00:24:42.782 ' 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:42.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.782 --rc genhtml_branch_coverage=1 00:24:42.782 --rc genhtml_function_coverage=1 00:24:42.782 --rc genhtml_legend=1 00:24:42.782 --rc geninfo_all_blocks=1 00:24:42.782 --rc geninfo_unexecuted_blocks=1 00:24:42.782 00:24:42.782 ' 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:42.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.782 --rc genhtml_branch_coverage=1 00:24:42.782 --rc genhtml_function_coverage=1 00:24:42.782 --rc genhtml_legend=1 00:24:42.782 --rc geninfo_all_blocks=1 00:24:42.782 --rc geninfo_unexecuted_blocks=1 00:24:42.782 00:24:42.782 ' 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:42.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:42.782 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:44.684 23:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:44.684 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.684 23:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:44.684 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:44.684 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:44.684 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.684 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.685 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.685 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:44.685 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.685 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.685 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:44.685 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:44.685 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.685 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.685 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:44.685 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:44.685 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.685 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.943 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.943 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.943 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:44.943 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.943 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.943 23:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.943 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:44.943 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:44.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:24:44.943 00:24:44.943 --- 10.0.0.2 ping statistics --- 00:24:44.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.943 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:24:44.944 00:24:44.944 --- 10.0.0.1 ping statistics --- 00:24:44.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.944 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=225555 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 225555 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 225555 ']' 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.944 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:44.944 [2024-11-19 23:49:19.144213] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:24:44.944 [2024-11-19 23:49:19.144321] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.944 [2024-11-19 23:49:19.221347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.214 [2024-11-19 23:49:19.267576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.214 [2024-11-19 23:49:19.267639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.214 [2024-11-19 23:49:19.267668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.214 [2024-11-19 23:49:19.267679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.214 [2024-11-19 23:49:19.267688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:45.214 [2024-11-19 23:49:19.268262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.214 [2024-11-19 23:49:19.411480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.214 Malloc0 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.214 23:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:45.214 [2024-11-19 23:49:19.452284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=225587 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=225588 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=225589 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:45.214 23:49:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 225587 00:24:45.492 [2024-11-19 23:49:19.531270] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:45.492 [2024-11-19 23:49:19.531561] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:45.492 [2024-11-19 23:49:19.531810] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:46.425 Initializing NVMe Controllers 00:24:46.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:46.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:46.425 Initialization complete. Launching workers. 
00:24:46.425 ======================================================== 00:24:46.425 Latency(us) 00:24:46.425 Device Information : IOPS MiB/s Average min max 00:24:46.425 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3894.00 15.21 256.42 168.47 734.22 00:24:46.425 ======================================================== 00:24:46.425 Total : 3894.00 15.21 256.42 168.47 734.22 00:24:46.425 00:24:46.425 Initializing NVMe Controllers 00:24:46.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:46.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:46.425 Initialization complete. Launching workers. 00:24:46.425 ======================================================== 00:24:46.425 Latency(us) 00:24:46.425 Device Information : IOPS MiB/s Average min max 00:24:46.425 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3819.00 14.92 261.38 175.47 597.79 00:24:46.425 ======================================================== 00:24:46.425 Total : 3819.00 14.92 261.38 175.47 597.79 00:24:46.425 00:24:46.425 Initializing NVMe Controllers 00:24:46.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:46.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:46.425 Initialization complete. Launching workers. 00:24:46.425 ======================================================== 00:24:46.425 Latency(us) 00:24:46.425 Device Information : IOPS MiB/s Average min max 00:24:46.425 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 24.00 0.09 41851.55 40928.10 41966.40 00:24:46.425 ======================================================== 00:24:46.425 Total : 24.00 0.09 41851.55 40928.10 41966.40 00:24:46.425 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 225588 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 225589 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.425 rmmod nvme_tcp 00:24:46.425 rmmod nvme_fabrics 00:24:46.425 rmmod nvme_keyring 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 225555 ']' 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 225555 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 225555 ']' 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 225555 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.425 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 225555 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 225555' 00:24:46.684 killing process with pid 225555 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 225555 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 225555 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.684 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.302 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.302 00:24:49.302 real 0m6.392s 00:24:49.302 user 0m5.530s 00:24:49.302 sys 0m2.701s 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:49.303 ************************************ 00:24:49.303 END TEST nvmf_control_msg_list 00:24:49.303 ************************************ 00:24:49.303 
23:49:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:49.303 ************************************ 00:24:49.303 START TEST nvmf_wait_for_buf 00:24:49.303 ************************************ 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:49.303 * Looking for test storage... 00:24:49.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:49.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.303 --rc genhtml_branch_coverage=1 00:24:49.303 --rc genhtml_function_coverage=1 00:24:49.303 --rc genhtml_legend=1 00:24:49.303 --rc geninfo_all_blocks=1 00:24:49.303 --rc geninfo_unexecuted_blocks=1 00:24:49.303 00:24:49.303 ' 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:49.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.303 --rc genhtml_branch_coverage=1 00:24:49.303 --rc genhtml_function_coverage=1 00:24:49.303 --rc genhtml_legend=1 00:24:49.303 --rc geninfo_all_blocks=1 00:24:49.303 --rc geninfo_unexecuted_blocks=1 00:24:49.303 00:24:49.303 ' 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:49.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.303 --rc genhtml_branch_coverage=1 00:24:49.303 --rc genhtml_function_coverage=1 00:24:49.303 --rc genhtml_legend=1 00:24:49.303 --rc geninfo_all_blocks=1 00:24:49.303 --rc geninfo_unexecuted_blocks=1 00:24:49.303 00:24:49.303 ' 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:49.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.303 --rc genhtml_branch_coverage=1 00:24:49.303 --rc genhtml_function_coverage=1 00:24:49.303 --rc genhtml_legend=1 00:24:49.303 --rc geninfo_all_blocks=1 00:24:49.303 --rc geninfo_unexecuted_blocks=1 00:24:49.303 00:24:49.303 ' 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.303 23:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.303 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.304 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.204 
23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:51.204 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:51.204 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:51.204 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:51.204 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.204 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.204 23:49:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:51.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:24:51.205 00:24:51.205 --- 10.0.0.2 ping statistics --- 00:24:51.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.205 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:51.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:24:51.205 00:24:51.205 --- 10.0.0.1 ping statistics --- 00:24:51.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.205 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=227669 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 227669 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 227669 ']' 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.205 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:51.205 [2024-11-19 23:49:25.376189] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
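In the wait_for_buf startup above, nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, and the test blocks on waitforlisten until the RPC socket answers before configuring the target. A condensed sketch of that sequence, with a simple polling loop standing in for waitforlisten (an assumption about its behaviour, not a copy of it):
    # start nvmf_tgt in the target namespace, paused until RPC configuration arrives
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # assumption: poll the default RPC socket until the app responds, similar to waitforlisten
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done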
00:24:51.205 [2024-11-19 23:49:25.376282] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.205 [2024-11-19 23:49:25.447953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.205 [2024-11-19 23:49:25.491734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.205 [2024-11-19 23:49:25.491790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.205 [2024-11-19 23:49:25.491813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.205 [2024-11-19 23:49:25.491824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.205 [2024-11-19 23:49:25.491833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.205 [2024-11-19 23:49:25.492430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.463 23:49:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:51.463 Malloc0 00:24:51.463 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:51.464 [2024-11-19 23:49:25.743986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:51.464 [2024-11-19 23:49:25.768226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.464 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:51.721 [2024-11-19 23:49:25.852941] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:53.095 Initializing NVMe Controllers 00:24:53.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:53.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:53.095 Initialization complete. Launching workers. 00:24:53.095 ======================================================== 00:24:53.095 Latency(us) 00:24:53.095 Device Information : IOPS MiB/s Average min max 00:24:53.095 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 84.00 10.50 49544.13 31908.46 111727.25 00:24:53.095 ======================================================== 00:24:53.095 Total : 84.00 10.50 49544.13 31908.46 111727.25 00:24:53.095 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1318 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1318 -eq 0 ]] 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:53.095 rmmod nvme_tcp 00:24:53.095 rmmod nvme_fabrics 00:24:53.095 rmmod nvme_keyring 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 227669 ']' 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 227669 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 227669 ']' 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 227669 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227669 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227669' 00:24:53.095 killing process with pid 227669 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 227669 00:24:53.095 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 227669 00:24:53.352 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:53.352 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:53.352 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:53.352 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:53.352 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:53.352 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:53.352 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:53.352 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:53.352 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:53.352 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.352 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.352 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:55.887 00:24:55.887 real 0m6.507s 00:24:55.887 user 0m3.033s 00:24:55.887 sys 0m1.877s 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.887 ************************************ 00:24:55.887 END TEST nvmf_wait_for_buf 00:24:55.887 ************************************ 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:24:55.887 ************************************ 00:24:55.887 START TEST nvmf_fuzz 00:24:55.887 ************************************ 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:55.887 * Looking for test storage... 00:24:55.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:55.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.887 --rc genhtml_branch_coverage=1 00:24:55.887 --rc genhtml_function_coverage=1 00:24:55.887 --rc genhtml_legend=1 00:24:55.887 --rc geninfo_all_blocks=1 00:24:55.887 --rc geninfo_unexecuted_blocks=1 00:24:55.887 00:24:55.887 ' 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:55.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.887 --rc genhtml_branch_coverage=1 00:24:55.887 --rc genhtml_function_coverage=1 00:24:55.887 --rc genhtml_legend=1 00:24:55.887 --rc geninfo_all_blocks=1 00:24:55.887 --rc geninfo_unexecuted_blocks=1 00:24:55.887 00:24:55.887 ' 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:55.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.887 --rc genhtml_branch_coverage=1 00:24:55.887 --rc genhtml_function_coverage=1 00:24:55.887 --rc genhtml_legend=1 00:24:55.887 --rc geninfo_all_blocks=1 00:24:55.887 --rc geninfo_unexecuted_blocks=1 00:24:55.887 00:24:55.887 ' 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:55.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.887 --rc genhtml_branch_coverage=1 00:24:55.887 --rc genhtml_function_coverage=1 00:24:55.887 --rc genhtml_legend=1 00:24:55.887 --rc geninfo_all_blocks=1 00:24:55.887 --rc geninfo_unexecuted_blocks=1 00:24:55.887 00:24:55.887 ' 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.887 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:55.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:55.888 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:57.792 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:57.792 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.792 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:57.793 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:57.793 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:57.793 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:57.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:24:57.793 00:24:57.793 --- 10.0.0.2 ping statistics --- 00:24:57.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.793 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:24:57.793 00:24:57.793 --- 10.0.0.1 ping statistics --- 00:24:57.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.793 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=229877 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 229877 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 229877 ']' 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
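The trace above is nvmftestinit wiring the physical (phy) test bed: one port of the dual-port E810 NIC (cvl_0_0) is moved into a dedicated network namespace and addressed as 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and both directions are verified with ping before the target application is launched inside the namespace. A minimal standalone sketch of the same sequence, assuming the interface names, namespace name, and addresses recorded in this run:

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> initiator

With connectivity confirmed, fabrics_fuzz.sh@13 starts nvmf_tgt inside that namespace on a single core (-m 0x1), so every NVMe/TCP connection from the fuzzer crosses the physical link rather than loopback.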
00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.793 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.360 Malloc0 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:58.360 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:30.423 Fuzzing completed. 
Shutting down the fuzz application 00:25:30.423 00:25:30.423 Dumping successful admin opcodes: 00:25:30.423 8, 9, 10, 24, 00:25:30.423 Dumping successful io opcodes: 00:25:30.423 0, 9, 00:25:30.424 NS: 0x2000008eff00 I/O qp, Total commands completed: 455287, total successful commands: 2641, random_seed: 1751714304 00:25:30.424 NS: 0x2000008eff00 admin qp, Total commands completed: 55167, total successful commands: 441, random_seed: 2590864768 00:25:30.424 23:50:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:30.424 Fuzzing completed. Shutting down the fuzz application 00:25:30.424 00:25:30.424 Dumping successful admin opcodes: 00:25:30.424 24, 00:25:30.424 Dumping successful io opcodes: 00:25:30.424 00:25:30.424 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1801216697 00:25:30.424 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1801367353 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:30.424 rmmod nvme_tcp 00:25:30.424 rmmod nvme_fabrics 00:25:30.424 rmmod nvme_keyring 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 229877 ']' 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 229877 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 229877 ']' 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 229877 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:30.424 23:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 229877 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 229877' 00:25:30.424 killing process with pid 229877 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 229877 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 229877 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.424 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.329 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:32.329 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:32.329 00:25:32.329 real 0m36.903s 00:25:32.329 user 0m50.192s 00:25:32.329 sys 0m15.428s 00:25:32.329 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:32.329 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:32.329 ************************************ 00:25:32.329 END TEST nvmf_fuzz 00:25:32.329 ************************************ 00:25:32.329 23:50:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:32.329 23:50:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:32.329 23:50:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.329 23:50:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:32.329 
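The fuzz stage that just ended ran nvme_fuzz twice against the cnode1 subsystem: a timed randomized pass (fabrics_fuzz.sh@30, with -t 30 -S 123456 -N -a) that completed roughly 455k I/O and 55k admin commands, and a replay of the canned commands in example.json (fabrics_fuzz.sh@32, with -j) that exercised only the admin queue. The subsystem is then deleted and nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring, restores the iptables state, tears down the namespace, and removes the two per-run fuzz logs. The two invocations can be reproduced standalone with the same transport ID; a sketch, with paths relative to the SPDK tree and the flags taken verbatim from the trace above:

  TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  # timed random pass (flags exactly as recorded above)
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
  # replay the bundled JSON command list instead of generating random commands
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$TRID" -j test/app/fuzz/nvme_fuzz/example.json -a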
************************************ 00:25:32.329 START TEST nvmf_multiconnection 00:25:32.329 ************************************ 00:25:32.329 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:32.329 * Looking for test storage... 00:25:32.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:32.329 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:32.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.590 --rc genhtml_branch_coverage=1 00:25:32.590 --rc genhtml_function_coverage=1 00:25:32.590 --rc genhtml_legend=1 00:25:32.590 --rc geninfo_all_blocks=1 00:25:32.590 --rc geninfo_unexecuted_blocks=1 00:25:32.590 00:25:32.590 ' 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:32.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.590 --rc genhtml_branch_coverage=1 00:25:32.590 --rc genhtml_function_coverage=1 00:25:32.590 --rc genhtml_legend=1 00:25:32.590 --rc geninfo_all_blocks=1 00:25:32.590 --rc geninfo_unexecuted_blocks=1 00:25:32.590 00:25:32.590 ' 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:32.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.590 --rc genhtml_branch_coverage=1 00:25:32.590 --rc genhtml_function_coverage=1 00:25:32.590 --rc genhtml_legend=1 00:25:32.590 --rc geninfo_all_blocks=1 00:25:32.590 --rc geninfo_unexecuted_blocks=1 00:25:32.590 00:25:32.590 ' 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:32.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.590 --rc genhtml_branch_coverage=1 00:25:32.590 --rc genhtml_function_coverage=1 00:25:32.590 --rc genhtml_legend=1 00:25:32.590 --rc geninfo_all_blocks=1 00:25:32.590 --rc geninfo_unexecuted_blocks=1 00:25:32.590 00:25:32.590 ' 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.590 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:32.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:32.591 23:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.123 23:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:35.123 23:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:35.123 23:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:35.123 23:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:35.123 23:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:35.123 23:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:35.123 23:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:35.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:35.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:35.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:35.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:35.123 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:35.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:35.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:25:35.124 00:25:35.124 --- 10.0.0.2 ping statistics --- 00:25:35.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.124 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:35.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:35.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:25:35.124 00:25:35.124 --- 10.0.0.1 ping statistics --- 00:25:35.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.124 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=235485 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 235485 00:25:35.124 23:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 235485 ']' 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.124 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.124 [2024-11-19 23:50:09.217127] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:25:35.124 [2024-11-19 23:50:09.217232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.124 [2024-11-19 23:50:09.297725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:35.124 [2024-11-19 23:50:09.351391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.124 [2024-11-19 23:50:09.351461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.124 [2024-11-19 23:50:09.351487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.124 [2024-11-19 23:50:09.351500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.124 [2024-11-19 23:50:09.351512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
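Unlike the fuzz stage's single-core target, nvmfappstart here launches nvmf_tgt with -m 0xF, so DPDK reports four available cores and, as the reactor messages just below show, SPDK starts one reactor per core; waitforlisten then blocks until the application answers on /var/tmp/spdk.sock. A rough standalone equivalent of that launch-and-wait step, assuming the default RPC socket path (the rpc_get_methods poll is only an approximation of the harness's waitforlisten helper):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the target is up and answering
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done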
00:25:35.124 [2024-11-19 23:50:09.353250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.124 [2024-11-19 23:50:09.353305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.124 [2024-11-19 23:50:09.353360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:35.124 [2024-11-19 23:50:09.353363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 [2024-11-19 23:50:09.499601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 Malloc1 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
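multiconnection.sh sets NVMF_SUBSYS=11 and, after creating the TCP transport once (multiconnection.sh@19, -t tcp -o -u 8192), loops over seq 1 11: for each index it creates a 64 MB malloc bdev with 512-byte blocks, a subsystem nqn.2016-06.io.spdk:cnodeN with serial SPDKN, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420; the trace continues this pattern for the remaining subsystems. Since rpc_cmd just forwards these calls over the target's RPC socket, the same provisioning can be written directly against scripts/rpc.py; a sketch under that assumption:

  for i in $(seq 1 11); do
      ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
      ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done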
00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 [2024-11-19 23:50:09.562919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 Malloc2 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 Malloc3 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 Malloc4 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.383 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.642 Malloc5 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.642 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 Malloc6 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 Malloc7 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 Malloc8 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 Malloc9 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:35.643 23:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.643 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.901 Malloc10 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.901 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.901 Malloc11 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.901 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:36.466 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:36.466 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:36.466 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:36.466 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:36.466 23:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:38.363 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:38.363 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:38.363 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:38.363 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:38.363 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:38.363 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:38.363 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.363 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:39.295 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:39.295 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:39.295 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.296 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:39.296 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:41.195 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:41.195 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:41.195 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:41.195 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:41.195 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.195 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:41.195 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.195 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:41.761 23:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:41.761 23:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:41.761 23:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:41.761 23:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:41.761 23:50:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:44.290 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:44.290 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:44.290 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:44.290 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:44.290 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.290 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:44.290 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.290 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:44.547 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:44.547 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:44.547 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:44.547 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:44.547 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:47.072 23:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:47.072 23:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:47.072 23:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:47.072 23:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:47.072 23:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.072 23:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:47.072 23:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.072 23:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:47.329 23:50:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:47.330 23:50:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:47.330 23:50:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:47.330 23:50:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:47.330 23:50:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:49.227 23:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:49.227 23:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:49.227 23:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:49.495 23:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:49.495 23:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:49.495 23:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:49.495 23:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.496 23:50:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:50.159 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:50.159 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:50.159 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:50.159 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:50.159 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:52.685 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:52.685 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:52.685 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:52.685 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:52.685 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:52.685 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:52.685 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.685 23:50:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:53.251 23:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:53.251 23:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:53.251 23:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:53.251 23:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:53.251 23:50:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:55.147 23:50:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:55.147 23:50:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:55.147 23:50:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:55.147 23:50:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:55.147 23:50:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:55.147 23:50:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:55.147 23:50:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.147 23:50:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:56.078 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:56.078 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:56.078 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:56.078 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:56.078 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:57.976 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:57.976 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:57.976 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:57.976 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:57.976 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:57.977 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:57.977 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.977 23:50:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:58.910 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:58.910 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:58.910 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.910 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:58.910 23:50:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:01.437 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:01.437 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:01.437 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:01.437 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:01.437 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:01.437 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:01.437 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:01.437 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:01.695 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:01.695 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:01.695 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.695 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:01.695 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:04.221 23:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:04.221 23:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:04.221 23:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:04.221 23:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:04.221 23:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:04.221 23:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:04.221 23:50:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.221 23:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:04.787 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:04.787 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:04.787 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:04.787 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:04.787 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:06.683 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:06.683 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:06.683 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:06.683 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:06.683 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:06.683 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:06.683 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:06.683 [global] 00:26:06.683 thread=1 00:26:06.683 invalidate=1 00:26:06.683 rw=read 00:26:06.683 time_based=1 00:26:06.683 runtime=10 00:26:06.683 ioengine=libaio 00:26:06.683 direct=1 00:26:06.683 bs=262144 00:26:06.683 iodepth=64 00:26:06.683 norandommap=1 00:26:06.683 numjobs=1 00:26:06.683 00:26:06.683 [job0] 00:26:06.683 filename=/dev/nvme0n1 00:26:06.683 [job1] 00:26:06.683 filename=/dev/nvme10n1 00:26:06.683 [job2] 00:26:06.683 filename=/dev/nvme1n1 00:26:06.683 [job3] 00:26:06.683 filename=/dev/nvme2n1 00:26:06.683 [job4] 00:26:06.683 filename=/dev/nvme3n1 00:26:06.683 [job5] 00:26:06.683 filename=/dev/nvme4n1 00:26:06.941 [job6] 00:26:06.941 filename=/dev/nvme5n1 00:26:06.941 [job7] 00:26:06.941 filename=/dev/nvme6n1 00:26:06.941 [job8] 00:26:06.941 filename=/dev/nvme7n1 00:26:06.941 [job9] 00:26:06.941 filename=/dev/nvme8n1 00:26:06.941 [job10] 00:26:06.941 filename=/dev/nvme9n1 00:26:06.941 Could not set queue depth (nvme0n1) 00:26:06.941 Could not set queue depth (nvme10n1) 00:26:06.941 Could not set queue depth (nvme1n1) 00:26:06.941 Could not set queue depth (nvme2n1) 00:26:06.941 Could not set queue depth (nvme3n1) 00:26:06.941 Could not set queue depth (nvme4n1) 00:26:06.941 Could not set queue depth (nvme5n1) 00:26:06.941 Could not set queue depth (nvme6n1) 00:26:06.941 Could not set queue depth (nvme7n1) 00:26:06.941 Could not set queue depth (nvme8n1) 00:26:06.941 Could not set queue depth (nvme9n1) 00:26:07.198 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.198 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.198 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.198 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.198 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.198 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.198 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.198 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.198 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.198 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.198 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.198 fio-3.35 00:26:07.198 Starting 11 threads 00:26:19.401 00:26:19.401 job0: (groupid=0, jobs=1): err= 0: pid=239729: Tue Nov 19 23:50:51 2024 00:26:19.401 read: IOPS=123, BW=31.0MiB/s (32.5MB/s)(317MiB/10218msec) 00:26:19.401 slat (usec): min=14, max=255085, avg=7910.79, stdev=28819.19 00:26:19.401 clat (msec): min=13, max=1448, avg=508.07, stdev=308.41 00:26:19.401 lat (msec): min=13, max=1448, avg=515.99, stdev=313.07 00:26:19.401 clat percentiles (msec): 00:26:19.401 | 1.00th=[ 58], 5.00th=[ 86], 10.00th=[ 140], 20.00th=[ 194], 00:26:19.401 | 30.00th=[ 321], 40.00th=[ 384], 50.00th=[ 464], 60.00th=[ 542], 00:26:19.401 | 70.00th=[ 676], 80.00th=[ 785], 90.00th=[ 961], 95.00th=[ 1070], 00:26:19.401 | 99.00th=[ 1250], 99.50th=[ 1250], 99.90th=[ 1284], 99.95th=[ 1452], 00:26:19.401 | 99.99th=[ 1452] 00:26:19.401 bw ( KiB/s): min= 9216, max=101376, per=5.22%, avg=30771.20, stdev=23443.90, samples=20 00:26:19.401 iops : min= 36, max= 396, avg=120.20, stdev=91.58, samples=20 00:26:19.401 lat (msec) : 20=0.08%, 50=0.63%, 100=6.08%, 250=18.25%, 500=31.04% 00:26:19.401 lat (msec) : 750=19.04%, 1000=16.82%, 2000=8.06% 00:26:19.401 cpu : usr=0.10%, sys=0.46%, ctx=181, majf=0, minf=4097 00:26:19.401 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:26:19.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.401 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.401 issued rwts: total=1266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.401 job1: (groupid=0, jobs=1): err= 0: pid=239742: Tue Nov 19 23:50:51 2024 00:26:19.401 read: IOPS=161, BW=40.4MiB/s (42.4MB/s)(413MiB/10219msec) 00:26:19.401 slat (usec): min=9, max=230989, avg=3957.02, stdev=20471.84 00:26:19.401 clat (usec): min=1866, max=1327.2k, avg=391284.09, stdev=286520.52 00:26:19.401 lat (usec): min=1891, max=1327.2k, avg=395241.10, stdev=290027.96 00:26:19.402 clat percentiles (msec): 00:26:19.402 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 70], 20.00th=[ 112], 00:26:19.402 | 30.00th=[ 171], 40.00th=[ 279], 50.00th=[ 326], 60.00th=[ 426], 00:26:19.402 | 70.00th=[ 550], 80.00th=[ 642], 90.00th=[ 776], 95.00th=[ 936], 00:26:19.402 | 99.00th=[ 1284], 99.50th=[ 1318], 99.90th=[ 1334], 99.95th=[ 1334], 00:26:19.402 | 99.99th=[ 1334] 
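As a reminder of where the /dev/nvmeXn1 devices driving these statistics came from: they were attached by the nvme connect / waitforserial loop logged before the fio run. Reduced to plain shell, that loop is roughly the sketch below (the retry ceiling that waitforserial enforces is omitted; the addresses, NQNs and host identity are copied from the log):

  for i in $(seq 1 11); do
      # connect each exported subsystem over NVMe/TCP
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode$i \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
          --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
      # wait until a namespace with serial SPDK$i shows up, as waitforserial does via lsblk
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do sleep 2; done
  done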
00:26:19.402 bw ( KiB/s): min=13312, max=146944, per=6.90%, avg=40678.40, stdev=34879.54, samples=20 00:26:19.402 iops : min= 52, max= 574, avg=158.90, stdev=136.25, samples=20 00:26:19.402 lat (msec) : 2=0.12%, 4=0.12%, 10=1.21%, 20=1.94%, 50=5.26% 00:26:19.402 lat (msec) : 100=5.93%, 250=22.57%, 500=28.07%, 750=23.71%, 1000=7.62% 00:26:19.402 lat (msec) : 2000=3.45% 00:26:19.402 cpu : usr=0.09%, sys=0.50%, ctx=283, majf=0, minf=4097 00:26:19.402 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:26:19.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.402 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.402 issued rwts: total=1653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.402 job2: (groupid=0, jobs=1): err= 0: pid=239759: Tue Nov 19 23:50:51 2024 00:26:19.402 read: IOPS=204, BW=51.2MiB/s (53.7MB/s)(523MiB/10220msec) 00:26:19.402 slat (usec): min=9, max=1214.3k, avg=2484.44, stdev=31297.60 00:26:19.402 clat (msec): min=13, max=2231, avg=309.82, stdev=428.50 00:26:19.402 lat (msec): min=13, max=2231, avg=312.30, stdev=431.21 00:26:19.402 clat percentiles (msec): 00:26:19.402 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 47], 00:26:19.402 | 30.00th=[ 55], 40.00th=[ 74], 50.00th=[ 99], 60.00th=[ 150], 00:26:19.402 | 70.00th=[ 275], 80.00th=[ 550], 90.00th=[ 902], 95.00th=[ 1234], 00:26:19.402 | 99.00th=[ 2123], 99.50th=[ 2140], 99.90th=[ 2232], 99.95th=[ 2232], 00:26:19.402 | 99.99th=[ 2232] 00:26:19.402 bw ( KiB/s): min= 5632, max=231936, per=9.78%, avg=57693.83, stdev=69301.29, samples=18 00:26:19.402 iops : min= 22, max= 906, avg=225.33, stdev=270.70, samples=18 00:26:19.402 lat (msec) : 20=0.43%, 50=23.09%, 100=28.11%, 250=14.87%, 500=10.52% 00:26:19.402 lat (msec) : 750=9.51%, 1000=5.59%, 2000=6.02%, >=2000=1.86% 00:26:19.402 cpu : usr=0.14%, sys=0.67%, ctx=348, majf=0, minf=4097 00:26:19.402 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:26:19.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.402 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.402 job3: (groupid=0, jobs=1): err= 0: pid=239768: Tue Nov 19 23:50:51 2024 00:26:19.402 read: IOPS=70, BW=17.7MiB/s (18.6MB/s)(181MiB/10179msec) 00:26:19.402 slat (usec): min=8, max=998592, avg=12276.07, stdev=67994.62 00:26:19.402 clat (msec): min=3, max=2245, avg=889.29, stdev=682.12 00:26:19.402 lat (msec): min=3, max=2245, avg=901.56, stdev=690.06 00:26:19.402 clat percentiles (msec): 00:26:19.402 | 1.00th=[ 9], 5.00th=[ 19], 10.00th=[ 23], 20.00th=[ 51], 00:26:19.402 | 30.00th=[ 62], 40.00th=[ 978], 50.00th=[ 1045], 60.00th=[ 1217], 00:26:19.402 | 70.00th=[ 1368], 80.00th=[ 1502], 90.00th=[ 1687], 95.00th=[ 1905], 00:26:19.402 | 99.00th=[ 2165], 99.50th=[ 2198], 99.90th=[ 2232], 99.95th=[ 2232], 00:26:19.402 | 99.99th=[ 2232] 00:26:19.402 bw ( KiB/s): min= 2560, max=121856, per=3.17%, avg=18716.44, stdev=26642.66, samples=18 00:26:19.402 iops : min= 10, max= 476, avg=73.11, stdev=104.07, samples=18 00:26:19.402 lat (msec) : 4=0.42%, 10=0.97%, 20=6.79%, 50=11.50%, 100=13.30% 00:26:19.402 lat (msec) : 250=0.14%, 500=4.02%, 1000=3.74%, 2000=55.82%, >=2000=3.32% 00:26:19.402 cpu : usr=0.01%, sys=0.32%, ctx=91, majf=0, minf=4097 
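Each job block in this output corresponds to one entry of the generated job file shown before the run: libaio, direct I/O, 256 KiB sequential reads at queue depth 64 for 10 seconds, one job per namespace. To reproduce a single job outside the fio-wrapper, a roughly equivalent standalone invocation would be the following sketch (not the wrapper's exact command line; /dev/nvme0n1 stands in for whichever namespace is under test):

  fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=262144 \
      --ioengine=libaio --iodepth=64 --direct=1 --norandommap \
      --time_based --runtime=10 --numjobs=1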
00:26:19.402 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:26:19.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.402 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:26:19.402 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.402 job4: (groupid=0, jobs=1): err= 0: pid=239779: Tue Nov 19 23:50:51 2024 00:26:19.402 read: IOPS=255, BW=64.0MiB/s (67.1MB/s)(651MiB/10177msec) 00:26:19.402 slat (usec): min=9, max=594193, avg=2710.43, stdev=16909.22 00:26:19.402 clat (msec): min=25, max=1834, avg=247.08, stdev=263.67 00:26:19.402 lat (msec): min=25, max=1834, avg=249.79, stdev=265.09 00:26:19.402 clat percentiles (msec): 00:26:19.402 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 106], 00:26:19.402 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 123], 60.00th=[ 192], 00:26:19.402 | 70.00th=[ 292], 80.00th=[ 368], 90.00th=[ 498], 95.00th=[ 676], 00:26:19.402 | 99.00th=[ 1485], 99.50th=[ 1754], 99.90th=[ 1838], 99.95th=[ 1838], 00:26:19.402 | 99.99th=[ 1838] 00:26:19.402 bw ( KiB/s): min=16896, max=151040, per=11.61%, avg=68478.00, stdev=45359.73, samples=19 00:26:19.402 iops : min= 66, max= 590, avg=267.47, stdev=177.20, samples=19 00:26:19.402 lat (msec) : 50=5.95%, 100=7.33%, 250=52.44%, 500=24.30%, 750=6.72% 00:26:19.402 lat (msec) : 1000=0.77%, 2000=2.50% 00:26:19.402 cpu : usr=0.16%, sys=0.87%, ctx=666, majf=0, minf=4097 00:26:19.402 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:19.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.402 issued rwts: total=2605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.402 job5: (groupid=0, jobs=1): err= 0: pid=239827: Tue Nov 19 23:50:51 2024 00:26:19.402 read: IOPS=887, BW=222MiB/s (233MB/s)(2241MiB/10105msec) 00:26:19.402 slat (usec): min=11, max=376469, avg=953.52, stdev=5356.68 00:26:19.402 clat (msec): min=2, max=1364, avg=71.12, stdev=90.93 00:26:19.402 lat (msec): min=2, max=1581, avg=72.07, stdev=91.67 00:26:19.402 clat percentiles (msec): 00:26:19.402 | 1.00th=[ 11], 5.00th=[ 17], 10.00th=[ 24], 20.00th=[ 55], 00:26:19.402 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 67], 00:26:19.402 | 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 82], 95.00th=[ 96], 00:26:19.402 | 99.00th=[ 275], 99.50th=[ 894], 99.90th=[ 1318], 99.95th=[ 1334], 00:26:19.402 | 99.99th=[ 1368] 00:26:19.402 bw ( KiB/s): min=29696, max=322560, per=38.65%, avg=227891.20, stdev=62223.42, samples=20 00:26:19.402 iops : min= 116, max= 1260, avg=890.20, stdev=243.06, samples=20 00:26:19.402 lat (msec) : 4=0.03%, 10=0.36%, 20=8.47%, 50=5.37%, 100=81.23% 00:26:19.402 lat (msec) : 250=3.37%, 500=0.44%, 750=0.03%, 1000=0.32%, 2000=0.39% 00:26:19.402 cpu : usr=0.48%, sys=3.14%, ctx=1276, majf=0, minf=4097 00:26:19.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:19.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.402 issued rwts: total=8965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.402 job6: (groupid=0, jobs=1): err= 0: pid=239849: Tue Nov 19 23:50:51 
2024 00:26:19.402 read: IOPS=117, BW=29.2MiB/s (30.7MB/s)(299MiB/10222msec) 00:26:19.402 slat (usec): min=13, max=395242, avg=7225.30, stdev=29646.90 00:26:19.402 clat (msec): min=5, max=1670, avg=539.26, stdev=328.64 00:26:19.402 lat (msec): min=5, max=1670, avg=546.48, stdev=333.00 00:26:19.402 clat percentiles (msec): 00:26:19.402 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 90], 20.00th=[ 288], 00:26:19.402 | 30.00th=[ 347], 40.00th=[ 443], 50.00th=[ 527], 60.00th=[ 600], 00:26:19.402 | 70.00th=[ 684], 80.00th=[ 802], 90.00th=[ 953], 95.00th=[ 1133], 00:26:19.402 | 99.00th=[ 1536], 99.50th=[ 1636], 99.90th=[ 1670], 99.95th=[ 1670], 00:26:19.402 | 99.99th=[ 1670] 00:26:19.402 bw ( KiB/s): min=13312, max=52224, per=4.91%, avg=28979.20, stdev=11880.86, samples=20 00:26:19.402 iops : min= 52, max= 204, avg=113.20, stdev=46.41, samples=20 00:26:19.402 lat (msec) : 10=0.92%, 20=3.01%, 50=3.93%, 100=2.68%, 250=7.36% 00:26:19.402 lat (msec) : 500=27.09%, 750=32.69%, 1000=14.46%, 2000=7.86% 00:26:19.402 cpu : usr=0.08%, sys=0.49%, ctx=205, majf=0, minf=4097 00:26:19.402 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:26:19.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.402 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.402 issued rwts: total=1196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.402 job7: (groupid=0, jobs=1): err= 0: pid=239865: Tue Nov 19 23:50:51 2024 00:26:19.402 read: IOPS=247, BW=61.8MiB/s (64.8MB/s)(629MiB/10178msec) 00:26:19.402 slat (usec): min=9, max=339142, avg=2404.27, stdev=16867.69 00:26:19.402 clat (usec): min=1938, max=1867.0k, avg=256247.52, stdev=326650.24 00:26:19.402 lat (msec): min=2, max=1867, avg=258.65, stdev=328.20 00:26:19.402 clat percentiles (msec): 00:26:19.402 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 69], 00:26:19.402 | 30.00th=[ 89], 40.00th=[ 92], 50.00th=[ 103], 60.00th=[ 121], 00:26:19.402 | 70.00th=[ 326], 80.00th=[ 405], 90.00th=[ 751], 95.00th=[ 860], 00:26:19.402 | 99.00th=[ 1720], 99.50th=[ 1754], 99.90th=[ 1770], 99.95th=[ 1770], 00:26:19.402 | 99.99th=[ 1871] 00:26:19.402 bw ( KiB/s): min= 6144, max=211968, per=11.83%, avg=69760.78, stdev=65069.41, samples=18 00:26:19.402 iops : min= 24, max= 828, avg=272.50, stdev=254.17, samples=18 00:26:19.402 lat (msec) : 2=0.04%, 4=0.12%, 10=3.54%, 20=13.16%, 50=2.54% 00:26:19.402 lat (msec) : 100=29.01%, 250=18.76%, 500=17.25%, 750=5.45%, 1000=6.80% 00:26:19.402 lat (msec) : 2000=3.34% 00:26:19.402 cpu : usr=0.19%, sys=0.72%, ctx=925, majf=0, minf=4098 00:26:19.402 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:19.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.402 issued rwts: total=2516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.402 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.402 job8: (groupid=0, jobs=1): err= 0: pid=239888: Tue Nov 19 23:50:51 2024 00:26:19.402 read: IOPS=92, BW=23.2MiB/s (24.3MB/s)(235MiB/10137msec) 00:26:19.402 slat (usec): min=13, max=844327, avg=9785.60, stdev=53022.39 00:26:19.402 clat (msec): min=29, max=1414, avg=679.11, stdev=349.50 00:26:19.402 lat (msec): min=29, max=2021, avg=688.89, stdev=357.98 00:26:19.402 clat percentiles (msec): 00:26:19.403 | 1.00th=[ 46], 5.00th=[ 88], 10.00th=[ 144], 20.00th=[ 481], 
00:26:19.403 | 30.00th=[ 567], 40.00th=[ 600], 50.00th=[ 651], 60.00th=[ 718], 00:26:19.403 | 70.00th=[ 844], 80.00th=[ 953], 90.00th=[ 1200], 95.00th=[ 1284], 00:26:19.403 | 99.00th=[ 1368], 99.50th=[ 1368], 99.90th=[ 1418], 99.95th=[ 1418], 00:26:19.403 | 99.99th=[ 1418] 00:26:19.403 bw ( KiB/s): min= 5632, max=86528, per=4.23%, avg=24945.78, stdev=17599.88, samples=18 00:26:19.403 iops : min= 22, max= 338, avg=97.44, stdev=68.75, samples=18 00:26:19.403 lat (msec) : 50=1.81%, 100=4.46%, 250=11.80%, 500=2.23%, 750=41.98% 00:26:19.403 lat (msec) : 1000=19.13%, 2000=18.60% 00:26:19.403 cpu : usr=0.06%, sys=0.43%, ctx=118, majf=0, minf=4097 00:26:19.403 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3% 00:26:19.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.403 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.403 issued rwts: total=941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.403 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.403 job9: (groupid=0, jobs=1): err= 0: pid=239889: Tue Nov 19 23:50:51 2024 00:26:19.403 read: IOPS=66, BW=16.7MiB/s (17.5MB/s)(170MiB/10209msec) 00:26:19.403 slat (usec): min=13, max=860487, avg=12261.60, stdev=59170.91 00:26:19.403 clat (msec): min=188, max=1996, avg=947.72, stdev=341.41 00:26:19.403 lat (msec): min=380, max=2148, avg=959.99, stdev=347.89 00:26:19.403 clat percentiles (msec): 00:26:19.403 | 1.00th=[ 439], 5.00th=[ 485], 10.00th=[ 558], 20.00th=[ 684], 00:26:19.403 | 30.00th=[ 751], 40.00th=[ 810], 50.00th=[ 869], 60.00th=[ 953], 00:26:19.403 | 70.00th=[ 1083], 80.00th=[ 1167], 90.00th=[ 1418], 95.00th=[ 1636], 00:26:19.403 | 99.00th=[ 1989], 99.50th=[ 1989], 99.90th=[ 2005], 99.95th=[ 2005], 00:26:19.403 | 99.99th=[ 2005] 00:26:19.403 bw ( KiB/s): min= 3584, max=32256, per=2.97%, avg=17521.78, stdev=7175.64, samples=18 00:26:19.403 iops : min= 14, max= 126, avg=68.44, stdev=28.03, samples=18 00:26:19.403 lat (msec) : 250=0.15%, 500=5.74%, 750=23.68%, 1000=35.59%, 2000=34.85% 00:26:19.403 cpu : usr=0.08%, sys=0.27%, ctx=79, majf=0, minf=4097 00:26:19.403 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:26:19.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.403 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:26:19.403 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.403 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.403 job10: (groupid=0, jobs=1): err= 0: pid=239893: Tue Nov 19 23:50:51 2024 00:26:19.403 read: IOPS=88, BW=22.2MiB/s (23.3MB/s)(227MiB/10216msec) 00:26:19.403 slat (usec): min=8, max=534152, avg=4224.66, stdev=31275.48 00:26:19.403 clat (msec): min=56, max=1595, avg=715.12, stdev=322.38 00:26:19.403 lat (msec): min=56, max=1595, avg=719.34, stdev=323.90 00:26:19.403 clat percentiles (msec): 00:26:19.403 | 1.00th=[ 59], 5.00th=[ 199], 10.00th=[ 309], 20.00th=[ 409], 00:26:19.403 | 30.00th=[ 584], 40.00th=[ 642], 50.00th=[ 693], 60.00th=[ 793], 00:26:19.403 | 70.00th=[ 852], 80.00th=[ 969], 90.00th=[ 1116], 95.00th=[ 1368], 00:26:19.403 | 99.00th=[ 1519], 99.50th=[ 1519], 99.90th=[ 1603], 99.95th=[ 1603], 00:26:19.403 | 99.99th=[ 1603] 00:26:19.403 bw ( KiB/s): min= 1536, max=40960, per=3.66%, avg=21606.40, stdev=11533.44, samples=20 00:26:19.403 iops : min= 6, max= 160, avg=84.40, stdev=45.05, samples=20 00:26:19.403 lat (msec) : 100=3.63%, 250=2.42%, 500=17.73%, 750=30.29%, 1000=29.74% 
00:26:19.403 lat (msec) : 2000=16.19% 00:26:19.403 cpu : usr=0.01%, sys=0.33%, ctx=127, majf=0, minf=3721 00:26:19.403 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:26:19.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.403 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.403 issued rwts: total=908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.403 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.403 00:26:19.403 Run status group 0 (all jobs): 00:26:19.403 READ: bw=576MiB/s (604MB/s), 16.7MiB/s-222MiB/s (17.5MB/s-233MB/s), io=5886MiB (6172MB), run=10105-10222msec 00:26:19.403 00:26:19.403 Disk stats (read/write): 00:26:19.403 nvme0n1: ios=2456/0, merge=0/0, ticks=1236209/0, in_queue=1236209, util=97.20% 00:26:19.403 nvme10n1: ios=3224/0, merge=0/0, ticks=1245734/0, in_queue=1245734, util=97.40% 00:26:19.403 nvme1n1: ios=4106/0, merge=0/0, ticks=1259352/0, in_queue=1259352, util=97.69% 00:26:19.403 nvme2n1: ios=1442/0, merge=0/0, ticks=1274314/0, in_queue=1274314, util=97.83% 00:26:19.403 nvme3n1: ios=5083/0, merge=0/0, ticks=1161477/0, in_queue=1161477, util=97.86% 00:26:19.403 nvme4n1: ios=17749/0, merge=0/0, ticks=1234661/0, in_queue=1234661, util=98.23% 00:26:19.403 nvme5n1: ios=2285/0, merge=0/0, ticks=1225825/0, in_queue=1225825, util=98.46% 00:26:19.403 nvme6n1: ios=4906/0, merge=0/0, ticks=1179517/0, in_queue=1179517, util=98.52% 00:26:19.403 nvme7n1: ios=1696/0, merge=0/0, ticks=1219006/0, in_queue=1219006, util=98.92% 00:26:19.403 nvme8n1: ios=1306/0, merge=0/0, ticks=1249338/0, in_queue=1249338, util=99.13% 00:26:19.403 nvme9n1: ios=1785/0, merge=0/0, ticks=1276297/0, in_queue=1276297, util=99.29% 00:26:19.403 23:50:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:19.403 [global] 00:26:19.403 thread=1 00:26:19.403 invalidate=1 00:26:19.403 rw=randwrite 00:26:19.403 time_based=1 00:26:19.403 runtime=10 00:26:19.403 ioengine=libaio 00:26:19.403 direct=1 00:26:19.403 bs=262144 00:26:19.403 iodepth=64 00:26:19.403 norandommap=1 00:26:19.403 numjobs=1 00:26:19.403 00:26:19.403 [job0] 00:26:19.403 filename=/dev/nvme0n1 00:26:19.403 [job1] 00:26:19.403 filename=/dev/nvme10n1 00:26:19.403 [job2] 00:26:19.403 filename=/dev/nvme1n1 00:26:19.403 [job3] 00:26:19.403 filename=/dev/nvme2n1 00:26:19.403 [job4] 00:26:19.403 filename=/dev/nvme3n1 00:26:19.403 [job5] 00:26:19.403 filename=/dev/nvme4n1 00:26:19.403 [job6] 00:26:19.403 filename=/dev/nvme5n1 00:26:19.403 [job7] 00:26:19.403 filename=/dev/nvme6n1 00:26:19.403 [job8] 00:26:19.403 filename=/dev/nvme7n1 00:26:19.403 [job9] 00:26:19.403 filename=/dev/nvme8n1 00:26:19.403 [job10] 00:26:19.403 filename=/dev/nvme9n1 00:26:19.403 Could not set queue depth (nvme0n1) 00:26:19.403 Could not set queue depth (nvme10n1) 00:26:19.403 Could not set queue depth (nvme1n1) 00:26:19.403 Could not set queue depth (nvme2n1) 00:26:19.403 Could not set queue depth (nvme3n1) 00:26:19.403 Could not set queue depth (nvme4n1) 00:26:19.403 Could not set queue depth (nvme5n1) 00:26:19.403 Could not set queue depth (nvme6n1) 00:26:19.403 Could not set queue depth (nvme7n1) 00:26:19.403 Could not set queue depth (nvme8n1) 00:26:19.403 Could not set queue depth (nvme9n1) 00:26:19.403 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:26:19.403 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.403 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.403 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.403 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.403 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.403 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.403 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.403 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.403 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.403 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.403 fio-3.35 00:26:19.403 Starting 11 threads 00:26:29.367 00:26:29.367 job0: (groupid=0, jobs=1): err= 0: pid=240470: Tue Nov 19 23:51:02 2024 00:26:29.367 write: IOPS=192, BW=48.2MiB/s (50.6MB/s)(497MiB/10303msec); 0 zone resets 00:26:29.367 slat (usec): min=21, max=194166, avg=4311.04, stdev=12994.68 00:26:29.367 clat (usec): min=1298, max=1208.4k, avg=327229.39, stdev=282716.20 00:26:29.367 lat (usec): min=1990, max=1208.4k, avg=331540.44, stdev=286378.53 00:26:29.367 clat percentiles (msec): 00:26:29.367 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 13], 20.00th=[ 24], 00:26:29.367 | 30.00th=[ 64], 40.00th=[ 207], 50.00th=[ 296], 60.00th=[ 376], 00:26:29.367 | 70.00th=[ 477], 80.00th=[ 609], 90.00th=[ 751], 95.00th=[ 827], 00:26:29.367 | 99.00th=[ 995], 99.50th=[ 1116], 99.90th=[ 1217], 99.95th=[ 1217], 00:26:29.367 | 99.99th=[ 1217] 00:26:29.367 bw ( KiB/s): min=14336, max=293888, per=5.86%, avg=49234.35, stdev=60459.62, samples=20 00:26:29.367 iops : min= 56, max= 1148, avg=192.30, stdev=236.17, samples=20 00:26:29.367 lat (msec) : 2=0.10%, 4=0.70%, 10=5.28%, 20=7.00%, 50=15.95% 00:26:29.367 lat (msec) : 100=3.02%, 250=11.73%, 500=28.43%, 750=17.51%, 1000=9.36% 00:26:29.367 lat (msec) : 2000=0.91% 00:26:29.367 cpu : usr=0.49%, sys=0.80%, ctx=1142, majf=0, minf=1 00:26:29.367 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:26:29.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.367 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.367 issued rwts: total=0,1987,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.367 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.367 job1: (groupid=0, jobs=1): err= 0: pid=240476: Tue Nov 19 23:51:02 2024 00:26:29.367 write: IOPS=178, BW=44.7MiB/s (46.9MB/s)(461MiB/10302msec); 0 zone resets 00:26:29.367 slat (usec): min=22, max=170799, avg=4138.34, stdev=11965.18 00:26:29.367 clat (usec): min=835, max=1267.4k, avg=353234.52, stdev=255692.76 00:26:29.367 lat (usec): min=865, max=1267.5k, avg=357372.86, stdev=258748.84 00:26:29.367 clat percentiles (msec): 00:26:29.367 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 29], 20.00th=[ 126], 00:26:29.367 | 30.00th=[ 224], 40.00th=[ 271], 50.00th=[ 296], 
60.00th=[ 321], 00:26:29.367 | 70.00th=[ 443], 80.00th=[ 567], 90.00th=[ 751], 95.00th=[ 835], 00:26:29.367 | 99.00th=[ 1062], 99.50th=[ 1200], 99.90th=[ 1267], 99.95th=[ 1267], 00:26:29.367 | 99.99th=[ 1267] 00:26:29.367 bw ( KiB/s): min=14336, max=122368, per=5.42%, avg=45522.20, stdev=25242.62, samples=20 00:26:29.367 iops : min= 56, max= 478, avg=177.80, stdev=98.60, samples=20 00:26:29.367 lat (usec) : 1000=0.11% 00:26:29.367 lat (msec) : 2=0.33%, 4=3.31%, 10=1.25%, 20=0.54%, 50=9.07% 00:26:29.367 lat (msec) : 100=3.09%, 250=18.35%, 500=38.11%, 750=15.58%, 1000=8.85% 00:26:29.367 lat (msec) : 2000=1.41% 00:26:29.367 cpu : usr=0.59%, sys=0.64%, ctx=924, majf=0, minf=1 00:26:29.367 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:29.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.367 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.367 issued rwts: total=0,1842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.367 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.367 job2: (groupid=0, jobs=1): err= 0: pid=240485: Tue Nov 19 23:51:02 2024 00:26:29.367 write: IOPS=196, BW=49.2MiB/s (51.6MB/s)(507MiB/10297msec); 0 zone resets 00:26:29.367 slat (usec): min=14, max=154851, avg=2947.78, stdev=10535.24 00:26:29.367 clat (usec): min=699, max=1141.9k, avg=321999.15, stdev=247019.55 00:26:29.367 lat (usec): min=742, max=1141.9k, avg=324946.93, stdev=249317.91 00:26:29.367 clat percentiles (usec): 00:26:29.367 | 1.00th=[ 1369], 5.00th=[ 5604], 10.00th=[ 10945], 00:26:29.367 | 20.00th=[ 47449], 30.00th=[ 124257], 40.00th=[ 270533], 00:26:29.367 | 50.00th=[ 325059], 60.00th=[ 358613], 70.00th=[ 417334], 00:26:29.367 | 80.00th=[ 557843], 90.00th=[ 658506], 95.00th=[ 742392], 00:26:29.367 | 99.00th=[ 943719], 99.50th=[1052771], 99.90th=[1149240], 00:26:29.367 | 99.95th=[1149240], 99.99th=[1149240] 00:26:29.368 bw ( KiB/s): min=15872, max=128512, per=5.98%, avg=50227.20, stdev=29387.24, samples=20 00:26:29.368 iops : min= 62, max= 502, avg=196.20, stdev=114.79, samples=20 00:26:29.368 lat (usec) : 750=0.20%, 1000=0.39% 00:26:29.368 lat (msec) : 2=1.43%, 4=1.04%, 10=6.47%, 20=4.59%, 50=6.42% 00:26:29.368 lat (msec) : 100=6.81%, 250=11.20%, 500=37.51%, 750=19.20%, 1000=4.00% 00:26:29.368 lat (msec) : 2000=0.74% 00:26:29.368 cpu : usr=0.51%, sys=0.74%, ctx=1368, majf=0, minf=1 00:26:29.368 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:29.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.368 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.368 issued rwts: total=0,2026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.368 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.368 job3: (groupid=0, jobs=1): err= 0: pid=240486: Tue Nov 19 23:51:02 2024 00:26:29.368 write: IOPS=420, BW=105MiB/s (110MB/s)(1083MiB/10295msec); 0 zone resets 00:26:29.368 slat (usec): min=15, max=372102, avg=1563.86, stdev=8268.43 00:26:29.368 clat (usec): min=768, max=1019.9k, avg=150333.82, stdev=193795.13 00:26:29.368 lat (usec): min=807, max=1029.3k, avg=151897.68, stdev=195683.79 00:26:29.368 clat percentiles (usec): 00:26:29.368 | 1.00th=[ 1106], 5.00th=[ 5604], 10.00th=[ 14615], 00:26:29.368 | 20.00th=[ 34866], 30.00th=[ 54264], 40.00th=[ 58459], 00:26:29.368 | 50.00th=[ 61080], 60.00th=[ 74974], 70.00th=[ 123208], 00:26:29.368 | 80.00th=[ 235930], 90.00th=[ 434111], 95.00th=[ 599786], 00:26:29.368 | 99.00th=[ 
868221], 99.50th=[ 918553], 99.90th=[ 985662], 00:26:29.368 | 99.95th=[1002439], 99.99th=[1019216] 00:26:29.368 bw ( KiB/s): min=20480, max=283136, per=13.00%, avg=109312.00, stdev=88533.31, samples=20 00:26:29.368 iops : min= 80, max= 1106, avg=427.00, stdev=345.83, samples=20 00:26:29.368 lat (usec) : 1000=0.51% 00:26:29.368 lat (msec) : 2=2.84%, 4=1.20%, 10=1.50%, 20=7.09%, 50=12.90% 00:26:29.368 lat (msec) : 100=39.21%, 250=15.35%, 500=12.60%, 750=4.11%, 1000=2.63% 00:26:29.368 lat (msec) : 2000=0.07% 00:26:29.368 cpu : usr=1.19%, sys=1.76%, ctx=2560, majf=0, minf=1 00:26:29.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:29.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.368 issued rwts: total=0,4333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.368 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.368 job4: (groupid=0, jobs=1): err= 0: pid=240487: Tue Nov 19 23:51:02 2024 00:26:29.368 write: IOPS=300, BW=75.2MiB/s (78.8MB/s)(769MiB/10228msec); 0 zone resets 00:26:29.368 slat (usec): min=15, max=251907, avg=2285.31, stdev=9787.15 00:26:29.368 clat (usec): min=927, max=1259.3k, avg=210486.14, stdev=233545.81 00:26:29.368 lat (usec): min=996, max=1259.4k, avg=212771.45, stdev=235642.48 00:26:29.368 clat percentiles (msec): 00:26:29.368 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 24], 20.00th=[ 52], 00:26:29.368 | 30.00th=[ 61], 40.00th=[ 84], 50.00th=[ 117], 60.00th=[ 163], 00:26:29.368 | 70.00th=[ 241], 80.00th=[ 321], 90.00th=[ 523], 95.00th=[ 693], 00:26:29.368 | 99.00th=[ 1053], 99.50th=[ 1133], 99.90th=[ 1250], 99.95th=[ 1250], 00:26:29.368 | 99.99th=[ 1267] 00:26:29.368 bw ( KiB/s): min=16384, max=276480, per=9.17%, avg=77085.05, stdev=71122.01, samples=20 00:26:29.368 iops : min= 64, max= 1080, avg=301.10, stdev=277.83, samples=20 00:26:29.368 lat (usec) : 1000=0.03% 00:26:29.368 lat (msec) : 2=0.46%, 4=1.59%, 10=3.67%, 20=3.32%, 50=8.26% 00:26:29.368 lat (msec) : 100=29.27%, 250=24.42%, 500=18.15%, 750=7.02%, 1000=1.72% 00:26:29.368 lat (msec) : 2000=2.08% 00:26:29.368 cpu : usr=0.86%, sys=0.93%, ctx=1608, majf=0, minf=1 00:26:29.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:29.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.368 issued rwts: total=0,3075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.368 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.368 job5: (groupid=0, jobs=1): err= 0: pid=240488: Tue Nov 19 23:51:02 2024 00:26:29.368 write: IOPS=313, BW=78.4MiB/s (82.2MB/s)(808MiB/10309msec); 0 zone resets 00:26:29.368 slat (usec): min=21, max=284617, avg=2229.01, stdev=10668.48 00:26:29.368 clat (usec): min=1033, max=1170.4k, avg=201284.12, stdev=230390.24 00:26:29.368 lat (usec): min=1743, max=1450.1k, avg=203513.13, stdev=232812.85 00:26:29.368 clat percentiles (msec): 00:26:29.368 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 44], 00:26:29.368 | 30.00th=[ 52], 40.00th=[ 65], 50.00th=[ 80], 60.00th=[ 144], 00:26:29.368 | 70.00th=[ 241], 80.00th=[ 372], 90.00th=[ 550], 95.00th=[ 709], 00:26:29.368 | 99.00th=[ 986], 99.50th=[ 1053], 99.90th=[ 1167], 99.95th=[ 1167], 00:26:29.368 | 99.99th=[ 1167] 00:26:29.368 bw ( KiB/s): min=16384, max=266240, per=9.65%, avg=81108.05, stdev=74548.66, samples=20 00:26:29.368 iops : min= 64, 
max= 1040, avg=316.80, stdev=291.21, samples=20 00:26:29.368 lat (msec) : 2=0.19%, 4=1.39%, 10=2.66%, 20=7.58%, 50=16.31% 00:26:29.368 lat (msec) : 100=27.78%, 250=15.62%, 500=15.93%, 750=8.94%, 1000=2.69% 00:26:29.368 lat (msec) : 2000=0.90% 00:26:29.368 cpu : usr=0.89%, sys=1.14%, ctx=2008, majf=0, minf=1 00:26:29.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:29.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.368 issued rwts: total=0,3232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.368 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.368 job6: (groupid=0, jobs=1): err= 0: pid=240490: Tue Nov 19 23:51:02 2024 00:26:29.368 write: IOPS=446, BW=112MiB/s (117MB/s)(1122MiB/10050msec); 0 zone resets 00:26:29.368 slat (usec): min=18, max=177012, avg=1228.97, stdev=5806.23 00:26:29.368 clat (usec): min=816, max=924836, avg=141634.87, stdev=165518.69 00:26:29.368 lat (usec): min=841, max=940710, avg=142863.84, stdev=166775.34 00:26:29.368 clat percentiles (usec): 00:26:29.368 | 1.00th=[ 1483], 5.00th=[ 4293], 10.00th=[ 9503], 20.00th=[ 30802], 00:26:29.368 | 30.00th=[ 44827], 40.00th=[ 51643], 50.00th=[ 73925], 60.00th=[100140], 00:26:29.368 | 70.00th=[170918], 80.00th=[235930], 90.00th=[371196], 95.00th=[459277], 00:26:29.368 | 99.00th=[834667], 99.50th=[859833], 99.90th=[918553], 99.95th=[918553], 00:26:29.368 | 99.99th=[926942] 00:26:29.368 bw ( KiB/s): min=30208, max=338944, per=13.47%, avg=113234.85, stdev=87817.98, samples=20 00:26:29.368 iops : min= 118, max= 1324, avg=442.30, stdev=343.05, samples=20 00:26:29.368 lat (usec) : 1000=0.29% 00:26:29.368 lat (msec) : 2=1.16%, 4=3.30%, 10=5.77%, 20=5.95%, 50=22.54% 00:26:29.368 lat (msec) : 100=20.95%, 250=21.22%, 500=15.31%, 750=1.52%, 1000=1.98% 00:26:29.368 cpu : usr=1.14%, sys=1.65%, ctx=3092, majf=0, minf=1 00:26:29.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:29.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.368 issued rwts: total=0,4486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.368 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.368 job7: (groupid=0, jobs=1): err= 0: pid=240491: Tue Nov 19 23:51:02 2024 00:26:29.368 write: IOPS=254, BW=63.6MiB/s (66.7MB/s)(655MiB/10300msec); 0 zone resets 00:26:29.368 slat (usec): min=22, max=440522, avg=2763.23, stdev=12218.68 00:26:29.368 clat (usec): min=1048, max=1490.9k, avg=248669.18, stdev=230679.70 00:26:29.368 lat (usec): min=1086, max=1490.9k, avg=251432.41, stdev=232944.88 00:26:29.368 clat percentiles (msec): 00:26:29.368 | 1.00th=[ 4], 5.00th=[ 35], 10.00th=[ 67], 20.00th=[ 94], 00:26:29.368 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 126], 60.00th=[ 199], 00:26:29.368 | 70.00th=[ 305], 80.00th=[ 430], 90.00th=[ 617], 95.00th=[ 743], 00:26:29.368 | 99.00th=[ 969], 99.50th=[ 995], 99.90th=[ 1200], 99.95th=[ 1385], 00:26:29.368 | 99.99th=[ 1485] 00:26:29.369 bw ( KiB/s): min= 2560, max=170496, per=7.78%, avg=65440.30, stdev=44207.89, samples=20 00:26:29.369 iops : min= 10, max= 666, avg=255.60, stdev=172.69, samples=20 00:26:29.369 lat (msec) : 2=0.57%, 4=1.41%, 10=1.53%, 20=0.42%, 50=2.98% 00:26:29.369 lat (msec) : 100=28.94%, 250=28.29%, 500=21.04%, 750=10.08%, 1000=4.28% 00:26:29.369 lat (msec) : 2000=0.46% 00:26:29.369 cpu : 
usr=0.71%, sys=1.04%, ctx=1369, majf=0, minf=2 00:26:29.369 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:29.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.369 issued rwts: total=0,2619,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.369 job8: (groupid=0, jobs=1): err= 0: pid=240492: Tue Nov 19 23:51:02 2024 00:26:29.369 write: IOPS=323, BW=81.0MiB/s (84.9MB/s)(833MiB/10283msec); 0 zone resets 00:26:29.369 slat (usec): min=15, max=108024, avg=1872.59, stdev=7464.73 00:26:29.369 clat (usec): min=748, max=1162.4k, avg=195628.93, stdev=204738.09 00:26:29.369 lat (usec): min=774, max=1162.4k, avg=197501.52, stdev=206829.29 00:26:29.369 clat percentiles (usec): 00:26:29.369 | 1.00th=[ 1319], 5.00th=[ 4228], 10.00th=[ 10683], 00:26:29.369 | 20.00th=[ 27657], 30.00th=[ 63701], 40.00th=[ 76022], 00:26:29.369 | 50.00th=[ 130548], 60.00th=[ 189793], 70.00th=[ 274727], 00:26:29.369 | 80.00th=[ 333448], 90.00th=[ 392168], 95.00th=[ 700449], 00:26:29.369 | 99.00th=[ 893387], 99.50th=[ 985662], 99.90th=[1115685], 00:26:29.369 | 99.95th=[1115685], 99.99th=[1166017] 00:26:29.369 bw ( KiB/s): min=20480, max=221696, per=9.95%, avg=83640.30, stdev=54692.30, samples=20 00:26:29.369 iops : min= 80, max= 866, avg=326.70, stdev=213.65, samples=20 00:26:29.369 lat (usec) : 750=0.03%, 1000=0.36% 00:26:29.369 lat (msec) : 2=2.16%, 4=1.95%, 10=5.08%, 20=6.67%, 50=11.50% 00:26:29.369 lat (msec) : 100=16.58%, 250=23.36%, 500=25.02%, 750=3.57%, 1000=3.33% 00:26:29.369 lat (msec) : 2000=0.39% 00:26:29.369 cpu : usr=0.86%, sys=1.17%, ctx=2275, majf=0, minf=1 00:26:29.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:29.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.369 issued rwts: total=0,3330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.369 job9: (groupid=0, jobs=1): err= 0: pid=240493: Tue Nov 19 23:51:02 2024 00:26:29.369 write: IOPS=276, BW=69.2MiB/s (72.6MB/s)(697MiB/10062msec); 0 zone resets 00:26:29.369 slat (usec): min=21, max=68925, avg=2513.02, stdev=6958.56 00:26:29.369 clat (usec): min=1274, max=824724, avg=228478.56, stdev=140972.57 00:26:29.369 lat (usec): min=1442, max=824771, avg=230991.58, stdev=142271.23 00:26:29.369 clat percentiles (msec): 00:26:29.369 | 1.00th=[ 5], 5.00th=[ 23], 10.00th=[ 60], 20.00th=[ 105], 00:26:29.369 | 30.00th=[ 134], 40.00th=[ 182], 50.00th=[ 215], 60.00th=[ 253], 00:26:29.369 | 70.00th=[ 309], 80.00th=[ 351], 90.00th=[ 384], 95.00th=[ 405], 00:26:29.369 | 99.00th=[ 726], 99.50th=[ 768], 99.90th=[ 818], 99.95th=[ 818], 00:26:29.369 | 99.99th=[ 827] 00:26:29.369 bw ( KiB/s): min=22016, max=171008, per=8.29%, avg=69713.00, stdev=33801.96, samples=20 00:26:29.369 iops : min= 86, max= 668, avg=272.30, stdev=132.05, samples=20 00:26:29.369 lat (msec) : 2=0.22%, 4=0.61%, 10=1.69%, 20=2.05%, 50=4.24% 00:26:29.369 lat (msec) : 100=10.23%, 250=40.60%, 500=37.15%, 750=2.48%, 1000=0.75% 00:26:29.369 cpu : usr=0.89%, sys=0.76%, ctx=1471, majf=0, minf=1 00:26:29.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:26:29.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.369 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.369 issued rwts: total=0,2786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.369 job10: (groupid=0, jobs=1): err= 0: pid=240494: Tue Nov 19 23:51:02 2024 00:26:29.369 write: IOPS=411, BW=103MiB/s (108MB/s)(1034MiB/10050msec); 0 zone resets 00:26:29.369 slat (usec): min=21, max=58718, avg=2084.51, stdev=5136.84 00:26:29.369 clat (msec): min=3, max=780, avg=153.42, stdev=112.97 00:26:29.369 lat (msec): min=3, max=793, avg=155.50, stdev=114.30 00:26:29.369 clat percentiles (msec): 00:26:29.369 | 1.00th=[ 12], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 79], 00:26:29.369 | 30.00th=[ 92], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 150], 00:26:29.369 | 70.00th=[ 192], 80.00th=[ 222], 90.00th=[ 300], 95.00th=[ 334], 00:26:29.369 | 99.00th=[ 693], 99.50th=[ 751], 99.90th=[ 768], 99.95th=[ 776], 00:26:29.369 | 99.99th=[ 785] 00:26:29.369 bw ( KiB/s): min=30208, max=202752, per=12.40%, avg=104248.20, stdev=50022.82, samples=20 00:26:29.369 iops : min= 118, max= 792, avg=407.20, stdev=195.42, samples=20 00:26:29.369 lat (msec) : 4=0.02%, 10=0.51%, 20=1.81%, 50=9.31%, 100=33.04% 00:26:29.369 lat (msec) : 250=39.40%, 500=14.37%, 750=1.06%, 1000=0.48% 00:26:29.369 cpu : usr=1.31%, sys=1.25%, ctx=1499, majf=0, minf=2 00:26:29.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:29.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.369 issued rwts: total=0,4135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.369 00:26:29.369 Run status group 0 (all jobs): 00:26:29.369 WRITE: bw=821MiB/s (861MB/s), 44.7MiB/s-112MiB/s (46.9MB/s-117MB/s), io=8463MiB (8874MB), run=10050-10309msec 00:26:29.369 00:26:29.369 Disk stats (read/write): 00:26:29.369 nvme0n1: ios=50/3900, merge=0/0, ticks=1771/1211275, in_queue=1213046, util=99.94% 00:26:29.369 nvme10n1: ios=43/3612, merge=0/0, ticks=396/1216467, in_queue=1216863, util=100.00% 00:26:29.369 nvme1n1: ios=51/3986, merge=0/0, ticks=725/1228133, in_queue=1228858, util=100.00% 00:26:29.369 nvme2n1: ios=49/8602, merge=0/0, ticks=818/1234066, in_queue=1234884, util=100.00% 00:26:29.369 nvme3n1: ios=0/6110, merge=0/0, ticks=0/1239409, in_queue=1239409, util=97.89% 00:26:29.369 nvme4n1: ios=46/6374, merge=0/0, ticks=2948/1151862, in_queue=1154810, util=100.00% 00:26:29.369 nvme5n1: ios=41/8691, merge=0/0, ticks=3622/1210389, in_queue=1214011, util=100.00% 00:26:29.369 nvme6n1: ios=40/5169, merge=0/0, ticks=1184/1200538, in_queue=1201722, util=100.00% 00:26:29.369 nvme7n1: ios=0/6604, merge=0/0, ticks=0/1226739, in_queue=1226739, util=98.85% 00:26:29.369 nvme8n1: ios=39/5322, merge=0/0, ticks=725/1221104, in_queue=1221829, util=100.00% 00:26:29.369 nvme9n1: ios=0/8034, merge=0/0, ticks=0/1217762, in_queue=1217762, util=99.11% 00:26:29.369 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:29.369 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:29.369 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.369 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:26:29.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.369 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:29.369 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.370 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode3 00:26:29.628 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.628 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:29.886 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:29.886 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:29.886 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:29.886 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:29.886 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:29.886 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:29.886 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:29.886 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:29.886 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:29.886 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.886 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.886 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.886 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.886 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode5 00:26:30.144 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:30.144 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:30.144 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:30.145 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.145 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode7 00:26:30.403 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:30.403 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.403 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode9 00:26:30.662 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.662 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:30.921 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:30.921 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:30.921 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.921 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.921 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:30.921 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.921 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:30.921 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:30.921 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:30.921 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.921 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect 
-n nqn.2016-06.io.spdk:cnode11 00:26:30.921 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:30.921 rmmod nvme_tcp 00:26:30.921 rmmod nvme_fabrics 00:26:30.921 rmmod nvme_keyring 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 235485 ']' 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 235485 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 235485 ']' 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 235485 00:26:30.921 23:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235485 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235485' 00:26:30.921 killing process with pid 235485 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 235485 00:26:30.921 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 235485 00:26:31.488 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:31.488 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:31.488 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:31.488 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:31.488 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:31.488 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:31.488 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:31.488 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:31.488 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:31.488 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.488 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.488 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.394 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:33.394 00:26:33.394 real 1m1.099s 00:26:33.394 user 3m35.419s 00:26:33.394 sys 0m15.019s 00:26:33.394 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.394 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.394 ************************************ 00:26:33.394 END TEST nvmf_multiconnection 00:26:33.394 ************************************ 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.653 
23:51:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:33.653 ************************************ 00:26:33.653 START TEST nvmf_initiator_timeout 00:26:33.653 ************************************ 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:33.653 * Looking for test storage... 00:26:33.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:33.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.653 --rc genhtml_branch_coverage=1 00:26:33.653 --rc genhtml_function_coverage=1 00:26:33.653 --rc genhtml_legend=1 00:26:33.653 --rc geninfo_all_blocks=1 00:26:33.653 --rc geninfo_unexecuted_blocks=1 00:26:33.653 00:26:33.653 ' 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:33.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.653 --rc genhtml_branch_coverage=1 00:26:33.653 --rc genhtml_function_coverage=1 00:26:33.653 --rc genhtml_legend=1 00:26:33.653 --rc geninfo_all_blocks=1 00:26:33.653 --rc geninfo_unexecuted_blocks=1 00:26:33.653 00:26:33.653 ' 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:33.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.653 --rc genhtml_branch_coverage=1 00:26:33.653 --rc genhtml_function_coverage=1 00:26:33.653 --rc genhtml_legend=1 00:26:33.653 --rc geninfo_all_blocks=1 00:26:33.653 --rc geninfo_unexecuted_blocks=1 00:26:33.653 00:26:33.653 ' 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:33.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.653 --rc genhtml_branch_coverage=1 00:26:33.653 --rc genhtml_function_coverage=1 00:26:33.653 --rc genhtml_legend=1 00:26:33.653 --rc geninfo_all_blocks=1 00:26:33.653 --rc geninfo_unexecuted_blocks=1 00:26:33.653 00:26:33.653 ' 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.653 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.654 23:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:33.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:33.654 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.222 23:51:09 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.222 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:36.223 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.223 23:51:09 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:36.223 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:36.223 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.223 23:51:09 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:36.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.223 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.223 23:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:26:36.223 00:26:36.223 --- 10.0.0.2 ping statistics --- 00:26:36.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.223 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:36.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:26:36.223 00:26:36.223 --- 10.0.0.1 ping statistics --- 00:26:36.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.223 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:36.223 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=244282 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 244282 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 244282 ']' 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:36.224 [2024-11-19 23:51:10.174029] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:26:36.224 [2024-11-19 23:51:10.174134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.224 [2024-11-19 23:51:10.252407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:36.224 [2024-11-19 23:51:10.302993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.224 [2024-11-19 23:51:10.303060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.224 [2024-11-19 23:51:10.303084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.224 [2024-11-19 23:51:10.303107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.224 [2024-11-19 23:51:10.303119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
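
The bring-up traced above and the RPC-driven test that the following records step through reduce to a short shell sequence. The sketch below is a hand-condensed reconstruction, not the harness itself: rpc.py is assumed as a stand-in for the harness's rpc_cmd wrapper, paths are relative to an SPDK checkout, and the interface names, addresses, NQN, serial and latency values are simply the ones this particular run logged.

  # 1. Point-to-point rig between the two E810 ports found above; the target side lives in its own netns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp

  # 2. Start the target inside the namespace (shm id 0, all trace groups, 4 cores) and configure it.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB / 512 B blocks
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # small baseline delays
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 3. Connect the kernel initiator and kick off the 60 s verified fio write job.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  ./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &        # generates the job file shown below
  fio_pid=$!
  sleep 3

  # 4. Raise the delay bdev's simulated latencies high enough to trip the initiator's I/O timeout,
  #    then drop them back to the small values and let fio finish and verify (values as logged).
  for t in avg_read avg_write p99_read; do ./scripts/rpc.py bdev_delay_update_latency Delay0 $t 31000000; done
  ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  for t in avg_read avg_write p99_read p99_write; do ./scripts/rpc.py bdev_delay_update_latency Delay0 $t 30; done
  wait "$fio_pid"                                                       # exits 0: "fio successful as expected"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The teardown at the end of the trace mirrors this in reverse: the subsystem is deleted, the nvme-tcp/nvme-fabrics modules are unloaded, the target process is killed, the SPDK_NVMF iptables rule is dropped and the namespace's addresses are flushed.
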
00:26:36.224 [2024-11-19 23:51:10.304791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.224 [2024-11-19 23:51:10.304848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.224 [2024-11-19 23:51:10.304901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.224 [2024-11-19 23:51:10.304904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:36.224 Malloc0 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:36.224 Delay0 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:36.224 [2024-11-19 23:51:10.500741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.224 23:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.224 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:36.224 [2024-11-19 23:51:10.529077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.482 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.483 23:51:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:37.049 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:37.049 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:37.049 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:37.049 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:37.049 23:51:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:38.948 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:38.948 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:38.948 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:38.948 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:38.948 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:38.948 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:38.948 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=244589 00:26:38.948 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t write -r 60 -v 00:26:38.948 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:38.948 [global] 00:26:38.948 thread=1 00:26:38.948 invalidate=1 00:26:38.948 rw=write 00:26:38.948 time_based=1 00:26:38.948 runtime=60 00:26:38.948 ioengine=libaio 00:26:38.948 direct=1 00:26:38.948 bs=4096 00:26:38.948 iodepth=1 00:26:38.948 norandommap=0 00:26:38.948 numjobs=1 00:26:38.948 00:26:38.948 verify_dump=1 00:26:38.948 verify_backlog=512 00:26:38.948 verify_state_save=0 00:26:38.948 do_verify=1 00:26:38.948 verify=crc32c-intel 00:26:38.948 [job0] 00:26:38.948 filename=/dev/nvme0n1 00:26:38.948 Could not set queue depth (nvme0n1) 00:26:39.206 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:39.206 fio-3.35 00:26:39.206 Starting 1 thread 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.586 true 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.586 true 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.586 true 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.586 true 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.586 23:51:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:45.112 true 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:45.112 true 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:45.112 true 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:45.112 true 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:45.112 23:51:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 244589 00:27:41.316 00:27:41.316 job0: (groupid=0, jobs=1): err= 0: pid=244729: Tue Nov 19 23:52:13 2024 00:27:41.316 read: IOPS=7, BW=31.3KiB/s (32.0kB/s)(1876KiB/60028msec) 00:27:41.316 slat (usec): min=9, max=5843, avg=36.19, stdev=268.94 00:27:41.316 clat (usec): min=283, max=41175k, avg=127602.68, stdev=1899483.56 00:27:41.316 lat (usec): min=305, max=41176k, avg=127638.87, stdev=1899482.63 00:27:41.316 clat percentiles (usec): 00:27:41.316 | 1.00th=[ 314], 5.00th=[ 40633], 10.00th=[ 41157], 00:27:41.316 | 20.00th=[ 41157], 30.00th=[ 41157], 40.00th=[ 41681], 00:27:41.316 | 50.00th=[ 41681], 60.00th=[ 42206], 70.00th=[ 42206], 00:27:41.316 | 80.00th=[ 42206], 90.00th=[ 42206], 95.00th=[ 42206], 00:27:41.316 | 99.00th=[ 42206], 99.50th=[ 42730], 99.90th=[17112761], 00:27:41.316 | 99.95th=[17112761], 99.99th=[17112761] 00:27:41.316 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60028msec); 0 zone resets 00:27:41.316 slat (usec): min=7, max=30640, avg=83.98, stdev=1353.10 00:27:41.316 clat (usec): min=171, max=412, avg=224.98, stdev=29.92 00:27:41.316 lat (usec): min=182, max=30905, avg=308.95, stdev=1355.21 00:27:41.316 clat percentiles (usec): 00:27:41.316 | 1.00th=[ 180], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 206], 00:27:41.316 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:27:41.316 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 273], 00:27:41.316 | 99.00th=[ 351], 99.50th=[ 400], 99.90th=[ 412], 99.95th=[ 412], 00:27:41.316 
| 99.99th=[ 412] 00:27:41.316 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:27:41.316 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:27:41.316 lat (usec) : 250=46.38%, 500=7.65%, 750=0.20% 00:27:41.316 lat (msec) : 50=45.67%, >=2000=0.10% 00:27:41.316 cpu : usr=0.02%, sys=0.06%, ctx=985, majf=0, minf=1 00:27:41.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:41.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.316 issued rwts: total=469,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:41.316 00:27:41.316 Run status group 0 (all jobs): 00:27:41.316 READ: bw=31.3KiB/s (32.0kB/s), 31.3KiB/s-31.3KiB/s (32.0kB/s-32.0kB/s), io=1876KiB (1921kB), run=60028-60028msec 00:27:41.316 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60028-60028msec 00:27:41.316 00:27:41.316 Disk stats (read/write): 00:27:41.316 nvme0n1: ios=517/512, merge=0/0, ticks=19706/111, in_queue=19817, util=100.00% 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:41.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:41.316 nvmf hotplug test: fio successful as expected 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - 
SIGINT SIGTERM EXIT 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:41.316 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.317 rmmod nvme_tcp 00:27:41.317 rmmod nvme_fabrics 00:27:41.317 rmmod nvme_keyring 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 244282 ']' 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 244282 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 244282 ']' 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 244282 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 244282 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 244282' 00:27:41.317 killing process with pid 244282 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 244282 00:27:41.317 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 244282 00:27:41.317 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:41.317 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:41.317 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:41.317 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:41.317 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:41.317 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:41.317 23:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:41.317 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:41.317 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:41.317 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.317 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.317 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.883 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:41.883 00:27:41.883 real 1m8.326s 00:27:41.883 user 4m11.516s 00:27:41.883 sys 0m6.069s 00:27:41.884 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:41.884 23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:41.884 ************************************ 00:27:41.884 END TEST nvmf_initiator_timeout 00:27:41.884 ************************************ 00:27:41.884 23:52:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:41.884 23:52:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:41.884 23:52:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:41.884 23:52:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:41.884 23:52:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.417 23:52:18 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:44.417 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:44.417 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:44.417 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.417 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:44.418 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:44.418 ************************************ 00:27:44.418 START TEST nvmf_perf_adq 00:27:44.418 ************************************ 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:44.418 * Looking for test storage... 
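
The device scan above is a second pass of the same gather_supported_nvmf_pci_devs helper that ran before the previous test: it matches PCI functions against known Intel/Mellanox vendor:device pairs and then reads the netdev name sysfs registers under each matching function. A rough hand-rolled equivalent, limited to the E810 0x8086:0x159b parts this node actually reported, might look like:

  # Walk the PCI bus, keep the Intel E810 functions this run matched (0x8086:0x159b),
  # and print the netdev name(s) sysfs associates with each one (cvl_0_0 / cvl_0_1 here).
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue                  # skip functions with no bound netdev
          echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done

The real helper additionally filters on link state (the [[ up == up ]] checks in the trace) before appending the names to net_devs, which is how cvl_0_0 and cvl_0_1 end up as the target and initiator interfaces for the next test.
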
00:27:44.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:44.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.418 --rc genhtml_branch_coverage=1 00:27:44.418 --rc genhtml_function_coverage=1 00:27:44.418 --rc genhtml_legend=1 00:27:44.418 --rc geninfo_all_blocks=1 00:27:44.418 --rc geninfo_unexecuted_blocks=1 00:27:44.418 00:27:44.418 ' 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:44.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.418 --rc genhtml_branch_coverage=1 00:27:44.418 --rc genhtml_function_coverage=1 00:27:44.418 --rc genhtml_legend=1 00:27:44.418 --rc geninfo_all_blocks=1 00:27:44.418 --rc geninfo_unexecuted_blocks=1 00:27:44.418 00:27:44.418 ' 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:44.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.418 --rc genhtml_branch_coverage=1 00:27:44.418 --rc genhtml_function_coverage=1 00:27:44.418 --rc genhtml_legend=1 00:27:44.418 --rc geninfo_all_blocks=1 00:27:44.418 --rc geninfo_unexecuted_blocks=1 00:27:44.418 00:27:44.418 ' 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:44.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.418 --rc genhtml_branch_coverage=1 00:27:44.418 --rc genhtml_function_coverage=1 00:27:44.418 --rc genhtml_legend=1 00:27:44.418 --rc geninfo_all_blocks=1 00:27:44.418 --rc geninfo_unexecuted_blocks=1 00:27:44.418 00:27:44.418 ' 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
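The lt 1.15 2 check traced above is scripts/common.sh deciding whether the installed lcov (1.15 on this host) predates 2.x, which is why autotest_common.sh keeps the --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 spelling seen in the exported LCOV_OPTS just above. The comparison splits both version strings on '.', '-' and ':' and compares them field by field as integers; a compressed, hedged rendering of that logic is sketched below (the function name ver_lt is ours).

ver_lt() {
    # True (returns 0) when $1 sorts strictly before $2; missing fields count as 0.
    local -a a b
    local i
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
ver_lt 1.15 2 && echo 'lcov is pre-2.x: keep the legacy --rc lcov_*_coverage options'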
00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.418 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:44.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:44.419 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:44.419 23:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:46.324 23:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:46.324 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:46.324 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:46.324 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:46.324 23:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:46.324 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:46.325 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.325 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:46.325 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:46.325 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.325 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:46.325 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.325 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:46.325 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:46.325 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:46.325 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:46.325 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:46.894 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:49.424 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:54.805 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:54.806 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:54.806 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:54.806 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:54.806 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:54.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:27:54.806 00:27:54.806 --- 10.0.0.2 ping statistics --- 00:27:54.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.806 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:27:54.806 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:54.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:27:54.806 00:27:54.806 --- 10.0.0.1 ping statistics --- 00:27:54.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.806 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=256303 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 256303 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 256303 ']' 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:54.807 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.807 [2024-11-19 23:52:28.751518] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
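The nvmf_tgt instance now starting inside cvl_0_0_ns_spdk relies on the plumbing nvmf_tcp_init traced just above. Condensed (and minus the SPDK_NVMF comment tag on the iptables rule), that setup amounts to the commands below; the interface names, namespace and addresses are the ones from this run, so treat it as an illustration rather than a fixed recipe.

# One E810 port becomes the target side inside its own namespace, the other
# stays in the root namespace as the initiator; 4420/tcp is opened for NVMe/TCP.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target (0.293 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator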
00:27:54.807 [2024-11-19 23:52:28.751609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.807 [2024-11-19 23:52:28.841492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:54.807 [2024-11-19 23:52:28.896843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.807 [2024-11-19 23:52:28.896909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.807 [2024-11-19 23:52:28.896926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.807 [2024-11-19 23:52:28.896940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.807 [2024-11-19 23:52:28.896952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:54.807 [2024-11-19 23:52:28.898713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.807 [2024-11-19 23:52:28.898773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.807 [2024-11-19 23:52:28.898829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.807 [2024-11-19 23:52:28.898832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.807 
23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.807 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.065 [2024-11-19 23:52:29.208396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.065 Malloc1 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.065 [2024-11-19 23:52:29.272176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=256340 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:55.065 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:57.595 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:57.595 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.595 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.595 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.595 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:57.595 "tick_rate": 2700000000, 00:27:57.595 "poll_groups": [ 00:27:57.595 { 00:27:57.595 "name": "nvmf_tgt_poll_group_000", 00:27:57.595 "admin_qpairs": 1, 00:27:57.595 "io_qpairs": 1, 00:27:57.595 "current_admin_qpairs": 1, 00:27:57.595 "current_io_qpairs": 1, 00:27:57.595 "pending_bdev_io": 0, 00:27:57.595 "completed_nvme_io": 18914, 00:27:57.595 "transports": [ 00:27:57.595 { 00:27:57.595 "trtype": "TCP" 00:27:57.595 } 00:27:57.595 ] 00:27:57.595 }, 00:27:57.595 { 00:27:57.595 "name": "nvmf_tgt_poll_group_001", 00:27:57.595 "admin_qpairs": 0, 00:27:57.595 "io_qpairs": 1, 00:27:57.595 "current_admin_qpairs": 0, 00:27:57.595 "current_io_qpairs": 1, 00:27:57.595 "pending_bdev_io": 0, 00:27:57.595 "completed_nvme_io": 19641, 00:27:57.595 "transports": [ 00:27:57.595 { 00:27:57.595 "trtype": "TCP" 00:27:57.595 } 00:27:57.595 ] 00:27:57.595 }, 00:27:57.595 { 00:27:57.595 "name": "nvmf_tgt_poll_group_002", 00:27:57.595 "admin_qpairs": 0, 00:27:57.595 "io_qpairs": 1, 00:27:57.595 "current_admin_qpairs": 0, 00:27:57.595 "current_io_qpairs": 1, 00:27:57.595 "pending_bdev_io": 0, 00:27:57.595 "completed_nvme_io": 19909, 00:27:57.595 "transports": [ 00:27:57.595 { 00:27:57.595 "trtype": "TCP" 00:27:57.595 } 00:27:57.595 ] 00:27:57.595 }, 00:27:57.595 { 00:27:57.595 "name": "nvmf_tgt_poll_group_003", 00:27:57.595 "admin_qpairs": 0, 00:27:57.595 "io_qpairs": 1, 00:27:57.595 "current_admin_qpairs": 0, 00:27:57.595 "current_io_qpairs": 1, 00:27:57.595 "pending_bdev_io": 0, 00:27:57.595 "completed_nvme_io": 19713, 00:27:57.595 "transports": [ 00:27:57.595 { 00:27:57.595 "trtype": "TCP" 00:27:57.595 } 00:27:57.595 ] 00:27:57.595 } 00:27:57.595 ] 00:27:57.595 }' 00:27:57.595 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:57.595 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:57.595 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:57.595 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:57.595 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 256340 00:28:05.705 Initializing NVMe Controllers 00:28:05.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:05.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:05.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:05.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:28:05.706 Initialization complete. Launching workers. 00:28:05.706 ======================================================== 00:28:05.706 Latency(us) 00:28:05.706 Device Information : IOPS MiB/s Average min max 00:28:05.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10361.40 40.47 6178.85 2517.19 10105.27 00:28:05.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10320.60 40.31 6201.60 2665.76 10537.00 00:28:05.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10516.20 41.08 6087.89 2536.72 11094.42 00:28:05.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10028.30 39.17 6382.99 2483.62 10463.75 00:28:05.706 ======================================================== 00:28:05.706 Total : 41226.49 161.04 6211.00 2483.62 11094.42 00:28:05.706 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.706 rmmod nvme_tcp 00:28:05.706 rmmod nvme_fabrics 00:28:05.706 rmmod nvme_keyring 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 256303 ']' 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 256303 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 256303 ']' 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 256303 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256303 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 256303' 00:28:05.706 killing process with pid 256303 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 256303 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 256303 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.706 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.611 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:07.611 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:07.611 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:07.611 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:08.177 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:10.706 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.975 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:15.976 23:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:15.976 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:15.976 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:15.976 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.976 23:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:15.976 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:15.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:28:15.976 00:28:15.976 --- 10.0.0.2 ping statistics --- 00:28:15.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.976 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:15.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:28:15.976 00:28:15.976 --- 10.0.0.1 ping statistics --- 00:28:15.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.976 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:15.976 net.core.busy_poll = 1 00:28:15.976 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:15.977 net.core.busy_read = 1 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=258970 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 258970 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 258970 ']' 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.977 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.977 [2024-11-19 23:52:49.964702] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:28:15.977 [2024-11-19 23:52:49.964789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.977 [2024-11-19 23:52:50.048320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:15.977 [2024-11-19 23:52:50.099094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
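Stripped of the xtrace prefixes, the adq_configure_driver steps recorded above reduce to a handful of host-side commands on the target interface: enable hardware TC offload, turn off the driver's packet-inspect optimization, enable busy polling, split the queues into two traffic classes with mqprio, and steer NVMe/TCP traffic (destination port 4420) into TC 1 with a hardware flower filter. A condensed sketch of that sequence, using the interface and addresses from this run; the ip netns wrapper and the SPDK-specific set_xps_rxqs step are dropped for readability:

    # Hedged sketch of the ADQ driver setup traced above (ice/E810, iface cvl_0_0).
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1
    # Two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded in channel mode.
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP traffic to 10.0.0.2:4420 into TC1 in hardware (skip_sw).
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1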
00:28:15.977 [2024-11-19 23:52:50.099161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.977 [2024-11-19 23:52:50.099177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.977 [2024-11-19 23:52:50.099190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.977 [2024-11-19 23:52:50.099202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:15.977 [2024-11-19 23:52:50.100824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.977 [2024-11-19 23:52:50.100878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.977 [2024-11-19 23:52:50.100989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.977 [2024-11-19 23:52:50.100992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.977 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.237 23:52:50 
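On the target side, adq_configure_nvmf_target first tunes the posix sock implementation before the framework is initialized: placement IDs tie incoming connections to the poll group that owns the matching hardware queue, and zero-copy send is enabled for the server role. The trace drives this through the test helper rpc_cmd; issued directly against the RPC socket the same calls would look roughly like the sketch below (the scripts/rpc.py path is a placeholder, the option flags are copied from the trace):

    # Hedged sketch of the socket-option RPCs traced above, via scripts/rpc.py.
    RPC=/path/to/spdk/scripts/rpc.py      # placeholder path
    $RPC sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
    $RPC framework_start_init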
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.237 [2024-11-19 23:52:50.400501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.237 Malloc1 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.237 [2024-11-19 23:52:50.467260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=259114 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:16.237 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:18.231 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:18.231 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.231 23:52:52 
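With the TCP transport created, the test provisions a single malloc-backed subsystem, exposes it on 10.0.0.2:4420, and launches spdk_nvme_perf against it from cores 4-7 (mask 0xF0) while the target runs on cores 0-3. Collapsed into direct commands, the sequence traced above is approximately the following; the rpc.py and build paths are placeholders, the arguments are taken verbatim from the trace:

    # Hedged sketch of the target provisioning and perf run traced above.
    RPC=/path/to/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 64-deep 4 KiB random reads for 10 s from four initiator cores (0xF0).
    /path/to/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'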
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.231 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.231 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:18.231 "tick_rate": 2700000000, 00:28:18.231 "poll_groups": [ 00:28:18.231 { 00:28:18.231 "name": "nvmf_tgt_poll_group_000", 00:28:18.231 "admin_qpairs": 1, 00:28:18.231 "io_qpairs": 0, 00:28:18.231 "current_admin_qpairs": 1, 00:28:18.231 "current_io_qpairs": 0, 00:28:18.231 "pending_bdev_io": 0, 00:28:18.231 "completed_nvme_io": 0, 00:28:18.231 "transports": [ 00:28:18.231 { 00:28:18.231 "trtype": "TCP" 00:28:18.231 } 00:28:18.231 ] 00:28:18.231 }, 00:28:18.231 { 00:28:18.231 "name": "nvmf_tgt_poll_group_001", 00:28:18.231 "admin_qpairs": 0, 00:28:18.231 "io_qpairs": 4, 00:28:18.231 "current_admin_qpairs": 0, 00:28:18.231 "current_io_qpairs": 4, 00:28:18.231 "pending_bdev_io": 0, 00:28:18.231 "completed_nvme_io": 33723, 00:28:18.231 "transports": [ 00:28:18.231 { 00:28:18.231 "trtype": "TCP" 00:28:18.231 } 00:28:18.231 ] 00:28:18.231 }, 00:28:18.231 { 00:28:18.231 "name": "nvmf_tgt_poll_group_002", 00:28:18.231 "admin_qpairs": 0, 00:28:18.231 "io_qpairs": 0, 00:28:18.231 "current_admin_qpairs": 0, 00:28:18.231 "current_io_qpairs": 0, 00:28:18.231 "pending_bdev_io": 0, 00:28:18.231 "completed_nvme_io": 0, 00:28:18.231 "transports": [ 00:28:18.231 { 00:28:18.231 "trtype": "TCP" 00:28:18.231 } 00:28:18.231 ] 00:28:18.231 }, 00:28:18.231 { 00:28:18.231 "name": "nvmf_tgt_poll_group_003", 00:28:18.231 "admin_qpairs": 0, 00:28:18.231 "io_qpairs": 0, 00:28:18.231 "current_admin_qpairs": 0, 00:28:18.231 "current_io_qpairs": 0, 00:28:18.231 "pending_bdev_io": 0, 00:28:18.231 "completed_nvme_io": 0, 00:28:18.231 "transports": [ 00:28:18.231 { 00:28:18.231 "trtype": "TCP" 00:28:18.231 } 00:28:18.231 ] 00:28:18.231 } 00:28:18.231 ] 00:28:18.231 }' 00:28:18.231 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:18.231 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:18.490 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:28:18.490 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:28:18.490 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 259114 00:28:26.611 Initializing NVMe Controllers 00:28:26.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:26.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:26.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:26.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:26.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:26.611 Initialization complete. Launching workers. 
00:28:26.611 ======================================================== 00:28:26.611 Latency(us) 00:28:26.611 Device Information : IOPS MiB/s Average min max 00:28:26.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3806.90 14.87 16816.58 1495.78 61921.53 00:28:26.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5007.70 19.56 12781.08 1609.02 62230.49 00:28:26.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4620.80 18.05 13851.46 1881.04 59792.54 00:28:26.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4437.40 17.33 14461.24 1996.56 61618.86 00:28:26.611 ======================================================== 00:28:26.611 Total : 17872.80 69.82 14334.52 1495.78 62230.49 00:28:26.611 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:26.611 rmmod nvme_tcp 00:28:26.611 rmmod nvme_fabrics 00:28:26.611 rmmod nvme_keyring 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 258970 ']' 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 258970 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 258970 ']' 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 258970 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 258970 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 258970' 00:28:26.611 killing process with pid 258970 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 258970 00:28:26.611 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 258970 00:28:26.870 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:26.870 23:53:00 
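The pass/fail gate for this test shows up in the nvmf_get_stats output earlier in the trace: with ADQ steering all I/O onto one poll group, only nvmf_tgt_poll_group_001 reports active io_qpairs while the other three stay idle, and the script counts the idle groups (three here) before comparing against the threshold in the [[ 3 -lt 2 ]] check. A sketch of that check, assuming the stats JSON has been saved to a file named nvmf_stats.json:

    # Hedged sketch of the idle-poll-group check from the trace above.
    # 'select(...) | length' emits one line per poll group with zero active
    # I/O qpairs; wc -l does the actual counting.
    idle=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' nvmf_stats.json | wc -l)
    if [[ $idle -lt 2 ]]; then
        echo "ADQ steering check failed: only $idle idle poll groups" >&2
    fi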
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:26.870 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:26.870 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:26.870 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:26.870 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:26.870 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:26.870 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:26.870 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:26.870 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.870 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.870 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.163 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:30.163 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:30.163 00:28:30.163 real 0m45.809s 00:28:30.163 user 2m37.480s 00:28:30.163 sys 0m11.136s 00:28:30.163 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.163 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.163 ************************************ 00:28:30.163 END TEST nvmf_perf_adq 00:28:30.163 ************************************ 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:30.163 ************************************ 00:28:30.163 START TEST nvmf_shutdown 00:28:30.163 ************************************ 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:30.163 * Looking for test storage... 
00:28:30.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:30.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.163 --rc genhtml_branch_coverage=1 00:28:30.163 --rc genhtml_function_coverage=1 00:28:30.163 --rc genhtml_legend=1 00:28:30.163 --rc geninfo_all_blocks=1 00:28:30.163 --rc geninfo_unexecuted_blocks=1 00:28:30.163 00:28:30.163 ' 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:30.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.163 --rc genhtml_branch_coverage=1 00:28:30.163 --rc genhtml_function_coverage=1 00:28:30.163 --rc genhtml_legend=1 00:28:30.163 --rc geninfo_all_blocks=1 00:28:30.163 --rc geninfo_unexecuted_blocks=1 00:28:30.163 00:28:30.163 ' 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:30.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.163 --rc genhtml_branch_coverage=1 00:28:30.163 --rc genhtml_function_coverage=1 00:28:30.163 --rc genhtml_legend=1 00:28:30.163 --rc geninfo_all_blocks=1 00:28:30.163 --rc geninfo_unexecuted_blocks=1 00:28:30.163 00:28:30.163 ' 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:30.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.163 --rc genhtml_branch_coverage=1 00:28:30.163 --rc genhtml_function_coverage=1 00:28:30.163 --rc genhtml_legend=1 00:28:30.163 --rc geninfo_all_blocks=1 00:28:30.163 --rc geninfo_unexecuted_blocks=1 00:28:30.163 00:28:30.163 ' 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
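The lcov probe above feeds the cmp_versions helper in scripts/common.sh, which splits both version strings into fields and compares them numerically; lt 1.15 2 is how the harness decides that the extra --rc lcov_branch_coverage/--rc lcov_function_coverage options are needed. A minimal standalone sketch of that comparison, assuming plain dotted numeric versions and a hypothetical version_lt helper name:

    # Hedged sketch of a field-wise version compare like the one traced above.
    version_lt() {                # returns 0 (true) if $1 < $2
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                  # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: enable branch/function coverage options"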
00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.163 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:30.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:30.164 23:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:30.164 ************************************ 00:28:30.164 START TEST nvmf_shutdown_tc1 00:28:30.164 ************************************ 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:30.164 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:32.065 23:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:32.065 23:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:32.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.065 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:32.066 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:32.066 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:32.066 23:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:32.066 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.066 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:32.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:32.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:28:32.325 00:28:32.325 --- 10.0.0.2 ping statistics --- 00:28:32.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.325 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:32.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:28:32.325 00:28:32.325 --- 10.0.0.1 ping statistics --- 00:28:32.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.325 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=262417 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 262417 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 262417 ']' 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
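For readability: the interface and namespace plumbing traced above reduces to the sketch below. The interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addresses and port 4420 are taken verbatim from the trace; the sketch itself is an illustrative reconstruction, not the literal nvmf/common.sh code, and needs root.

    # Rebuild the two-sided test topology: the target port (cvl_0_0) is moved into
    # a network namespace, the initiator port (cvl_0_1) stays in the root namespace.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                         # target ns -> initiator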
00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.325 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:32.325 [2024-11-19 23:53:06.502625] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:28:32.325 [2024-11-19 23:53:06.502713] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.325 [2024-11-19 23:53:06.584043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:32.584 [2024-11-19 23:53:06.636485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.584 [2024-11-19 23:53:06.636543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.584 [2024-11-19 23:53:06.636568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.584 [2024-11-19 23:53:06.636590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.584 [2024-11-19 23:53:06.636603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.584 [2024-11-19 23:53:06.638349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:32.584 [2024-11-19 23:53:06.638467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.584 [2024-11-19 23:53:06.638529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:32.584 [2024-11-19 23:53:06.638532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:32.584 [2024-11-19 23:53:06.790978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:32.584 23:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.584 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:32.584 Malloc1 
00:28:32.584 [2024-11-19 23:53:06.874765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.843 Malloc2 00:28:32.843 Malloc3 00:28:32.843 Malloc4 00:28:32.843 Malloc5 00:28:32.843 Malloc6 00:28:32.843 Malloc7 00:28:33.103 Malloc8 00:28:33.103 Malloc9 00:28:33.103 Malloc10 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=262588 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 262588 /var/tmp/bdevperf.sock 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 262588 ']' 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:33.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
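The rpcs.txt fed to the target above is not echoed in the trace, so the exact RPCs behind the Malloc1..Malloc10 bdevs are not visible here. A typical per-subsystem sequence against the running nvmf_tgt (default RPC socket /var/tmp/spdk.sock) would look roughly like the sketch below; the serial number and the malloc sizing are assumptions, only the NQN/bdev naming pattern and the 10.0.0.2:4420 listener are taken from the trace.

    # Hypothetical reconstruction of one create_subsystems iteration (i = 1..10),
    # using SPDK's standard rpc.py client.
    i=1
    rpc.py bdev_malloc_create 64 512 -b Malloc$i                        # 64 MiB ramdisk bdev, 512 B blocks (sizes illustrative)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i   # expose the bdev as a namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420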
00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.103 { 00:28:33.103 "params": { 00:28:33.103 "name": "Nvme$subsystem", 00:28:33.103 "trtype": "$TEST_TRANSPORT", 00:28:33.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.103 "adrfam": "ipv4", 00:28:33.103 "trsvcid": "$NVMF_PORT", 00:28:33.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.103 "hdgst": ${hdgst:-false}, 00:28:33.103 "ddgst": ${ddgst:-false} 00:28:33.103 }, 00:28:33.103 "method": "bdev_nvme_attach_controller" 00:28:33.103 } 00:28:33.103 EOF 00:28:33.103 )") 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.103 { 00:28:33.103 "params": { 00:28:33.103 "name": "Nvme$subsystem", 00:28:33.103 "trtype": "$TEST_TRANSPORT", 00:28:33.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.103 "adrfam": "ipv4", 00:28:33.103 "trsvcid": "$NVMF_PORT", 00:28:33.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.103 "hdgst": ${hdgst:-false}, 00:28:33.103 "ddgst": ${ddgst:-false} 00:28:33.103 }, 00:28:33.103 "method": "bdev_nvme_attach_controller" 00:28:33.103 } 00:28:33.103 EOF 00:28:33.103 )") 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.103 { 00:28:33.103 "params": { 00:28:33.103 "name": "Nvme$subsystem", 00:28:33.103 "trtype": "$TEST_TRANSPORT", 00:28:33.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.103 "adrfam": "ipv4", 00:28:33.103 "trsvcid": "$NVMF_PORT", 00:28:33.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.103 "hdgst": ${hdgst:-false}, 00:28:33.103 "ddgst": ${ddgst:-false} 00:28:33.103 }, 00:28:33.103 "method": "bdev_nvme_attach_controller" 00:28:33.103 } 00:28:33.103 EOF 00:28:33.103 )") 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.103 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.103 { 00:28:33.103 "params": { 00:28:33.103 "name": "Nvme$subsystem", 00:28:33.103 
"trtype": "$TEST_TRANSPORT", 00:28:33.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.103 "adrfam": "ipv4", 00:28:33.103 "trsvcid": "$NVMF_PORT", 00:28:33.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.103 "hdgst": ${hdgst:-false}, 00:28:33.104 "ddgst": ${ddgst:-false} 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 } 00:28:33.104 EOF 00:28:33.104 )") 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.104 { 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme$subsystem", 00:28:33.104 "trtype": "$TEST_TRANSPORT", 00:28:33.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "$NVMF_PORT", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.104 "hdgst": ${hdgst:-false}, 00:28:33.104 "ddgst": ${ddgst:-false} 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 } 00:28:33.104 EOF 00:28:33.104 )") 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.104 { 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme$subsystem", 00:28:33.104 "trtype": "$TEST_TRANSPORT", 00:28:33.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "$NVMF_PORT", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.104 "hdgst": ${hdgst:-false}, 00:28:33.104 "ddgst": ${ddgst:-false} 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 } 00:28:33.104 EOF 00:28:33.104 )") 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.104 { 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme$subsystem", 00:28:33.104 "trtype": "$TEST_TRANSPORT", 00:28:33.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "$NVMF_PORT", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.104 "hdgst": ${hdgst:-false}, 00:28:33.104 "ddgst": ${ddgst:-false} 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 } 00:28:33.104 EOF 00:28:33.104 )") 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.104 23:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.104 { 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme$subsystem", 00:28:33.104 "trtype": "$TEST_TRANSPORT", 00:28:33.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "$NVMF_PORT", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.104 "hdgst": ${hdgst:-false}, 00:28:33.104 "ddgst": ${ddgst:-false} 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 } 00:28:33.104 EOF 00:28:33.104 )") 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.104 { 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme$subsystem", 00:28:33.104 "trtype": "$TEST_TRANSPORT", 00:28:33.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "$NVMF_PORT", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.104 "hdgst": ${hdgst:-false}, 00:28:33.104 "ddgst": ${ddgst:-false} 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 } 00:28:33.104 EOF 00:28:33.104 )") 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:33.104 { 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme$subsystem", 00:28:33.104 "trtype": "$TEST_TRANSPORT", 00:28:33.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "$NVMF_PORT", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.104 "hdgst": ${hdgst:-false}, 00:28:33.104 "ddgst": ${ddgst:-false} 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 } 00:28:33.104 EOF 00:28:33.104 )") 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
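What gen_nvmf_target_json is doing above is mechanical: one JSON fragment per requested subsystem is expanded into a shell array, then the fragments are comma-joined and run through jq before being handed to bdev_svc on /dev/fd/63. A stripped-down illustration of that pattern follows; it uses printf instead of the heredocs seen in the trace purely to keep the sketch short, and it only wraps the joined fragments in a plain JSON array for validation, whereas the real helper embeds them in a fuller bdev-subsystem configuration.

    # Emit one bdev_nvme_attach_controller entry per subsystem number passed in.
    fmt='{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}'
    config=()
    for subsystem in "$@"; do
        config+=("$(printf "$fmt" "$subsystem" "$subsystem" "$subsystem")")
    done
    # Comma-join the fragments and sanity-check the result with jq.
    (IFS=,; printf '[%s]\n' "${config[*]}") | jq .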
00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:33.104 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme1", 00:28:33.104 "trtype": "tcp", 00:28:33.104 "traddr": "10.0.0.2", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "4420", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:33.104 "hdgst": false, 00:28:33.104 "ddgst": false 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 },{ 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme2", 00:28:33.104 "trtype": "tcp", 00:28:33.104 "traddr": "10.0.0.2", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "4420", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:33.104 "hdgst": false, 00:28:33.104 "ddgst": false 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 },{ 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme3", 00:28:33.104 "trtype": "tcp", 00:28:33.104 "traddr": "10.0.0.2", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "4420", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:33.104 "hdgst": false, 00:28:33.104 "ddgst": false 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 },{ 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme4", 00:28:33.104 "trtype": "tcp", 00:28:33.104 "traddr": "10.0.0.2", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "4420", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:33.104 "hdgst": false, 00:28:33.104 "ddgst": false 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 },{ 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme5", 00:28:33.104 "trtype": "tcp", 00:28:33.104 "traddr": "10.0.0.2", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "4420", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:33.104 "hdgst": false, 00:28:33.104 "ddgst": false 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 },{ 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme6", 00:28:33.104 "trtype": "tcp", 00:28:33.104 "traddr": "10.0.0.2", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "4420", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:33.104 "hdgst": false, 00:28:33.104 "ddgst": false 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 },{ 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme7", 00:28:33.104 "trtype": "tcp", 00:28:33.104 "traddr": "10.0.0.2", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "4420", 00:28:33.104 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:33.104 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:33.104 "hdgst": false, 00:28:33.104 "ddgst": false 00:28:33.104 }, 00:28:33.104 "method": "bdev_nvme_attach_controller" 00:28:33.104 },{ 00:28:33.104 "params": { 00:28:33.104 "name": "Nvme8", 00:28:33.104 "trtype": "tcp", 00:28:33.104 "traddr": "10.0.0.2", 00:28:33.104 "adrfam": "ipv4", 00:28:33.104 "trsvcid": "4420", 00:28:33.105 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:33.105 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:33.105 "hdgst": false, 00:28:33.105 "ddgst": false 00:28:33.105 }, 00:28:33.105 "method": "bdev_nvme_attach_controller" 00:28:33.105 },{ 00:28:33.105 "params": { 00:28:33.105 "name": "Nvme9", 00:28:33.105 "trtype": "tcp", 00:28:33.105 "traddr": "10.0.0.2", 00:28:33.105 "adrfam": "ipv4", 00:28:33.105 "trsvcid": "4420", 00:28:33.105 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:33.105 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:33.105 "hdgst": false, 00:28:33.105 "ddgst": false 00:28:33.105 }, 00:28:33.105 "method": "bdev_nvme_attach_controller" 00:28:33.105 },{ 00:28:33.105 "params": { 00:28:33.105 "name": "Nvme10", 00:28:33.105 "trtype": "tcp", 00:28:33.105 "traddr": "10.0.0.2", 00:28:33.105 "adrfam": "ipv4", 00:28:33.105 "trsvcid": "4420", 00:28:33.105 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:33.105 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:33.105 "hdgst": false, 00:28:33.105 "ddgst": false 00:28:33.105 }, 00:28:33.105 "method": "bdev_nvme_attach_controller" 00:28:33.105 }' 00:28:33.105 [2024-11-19 23:53:07.387201] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:28:33.105 [2024-11-19 23:53:07.387280] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:33.363 [2024-11-19 23:53:07.459825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.363 [2024-11-19 23:53:07.506837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.271 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.271 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:35.271 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:35.271 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.271 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:35.271 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.271 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 262588 00:28:35.271 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:35.271 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:36.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 262588 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:36.208 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 262417 00:28:36.208 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:36.208 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 
3 4 5 6 7 8 9 10 00:28:36.208 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:36.208 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:36.208 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.208 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.208 { 00:28:36.208 "params": { 00:28:36.208 "name": "Nvme$subsystem", 00:28:36.208 "trtype": "$TEST_TRANSPORT", 00:28:36.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.209 "adrfam": "ipv4", 00:28:36.209 "trsvcid": "$NVMF_PORT", 00:28:36.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.209 "hdgst": ${hdgst:-false}, 00:28:36.209 "ddgst": ${ddgst:-false} 00:28:36.209 }, 00:28:36.209 "method": "bdev_nvme_attach_controller" 00:28:36.209 } 00:28:36.209 EOF 00:28:36.209 )") 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.209 { 00:28:36.209 "params": { 00:28:36.209 "name": "Nvme$subsystem", 00:28:36.209 "trtype": "$TEST_TRANSPORT", 00:28:36.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.209 "adrfam": "ipv4", 00:28:36.209 "trsvcid": "$NVMF_PORT", 00:28:36.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.209 "hdgst": ${hdgst:-false}, 00:28:36.209 "ddgst": ${ddgst:-false} 00:28:36.209 }, 00:28:36.209 "method": "bdev_nvme_attach_controller" 00:28:36.209 } 00:28:36.209 EOF 00:28:36.209 )") 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.209 { 00:28:36.209 "params": { 00:28:36.209 "name": "Nvme$subsystem", 00:28:36.209 "trtype": "$TEST_TRANSPORT", 00:28:36.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.209 "adrfam": "ipv4", 00:28:36.209 "trsvcid": "$NVMF_PORT", 00:28:36.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.209 "hdgst": ${hdgst:-false}, 00:28:36.209 "ddgst": ${ddgst:-false} 00:28:36.209 }, 00:28:36.209 "method": "bdev_nvme_attach_controller" 00:28:36.209 } 00:28:36.209 EOF 00:28:36.209 )") 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.209 { 00:28:36.209 "params": { 00:28:36.209 "name": "Nvme$subsystem", 00:28:36.209 "trtype": "$TEST_TRANSPORT", 00:28:36.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.209 "adrfam": "ipv4", 00:28:36.209 
"trsvcid": "$NVMF_PORT", 00:28:36.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.209 "hdgst": ${hdgst:-false}, 00:28:36.209 "ddgst": ${ddgst:-false} 00:28:36.209 }, 00:28:36.209 "method": "bdev_nvme_attach_controller" 00:28:36.209 } 00:28:36.209 EOF 00:28:36.209 )") 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.209 { 00:28:36.209 "params": { 00:28:36.209 "name": "Nvme$subsystem", 00:28:36.209 "trtype": "$TEST_TRANSPORT", 00:28:36.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.209 "adrfam": "ipv4", 00:28:36.209 "trsvcid": "$NVMF_PORT", 00:28:36.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.209 "hdgst": ${hdgst:-false}, 00:28:36.209 "ddgst": ${ddgst:-false} 00:28:36.209 }, 00:28:36.209 "method": "bdev_nvme_attach_controller" 00:28:36.209 } 00:28:36.209 EOF 00:28:36.209 )") 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.209 { 00:28:36.209 "params": { 00:28:36.209 "name": "Nvme$subsystem", 00:28:36.209 "trtype": "$TEST_TRANSPORT", 00:28:36.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.209 "adrfam": "ipv4", 00:28:36.209 "trsvcid": "$NVMF_PORT", 00:28:36.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.209 "hdgst": ${hdgst:-false}, 00:28:36.209 "ddgst": ${ddgst:-false} 00:28:36.209 }, 00:28:36.209 "method": "bdev_nvme_attach_controller" 00:28:36.209 } 00:28:36.209 EOF 00:28:36.209 )") 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.209 { 00:28:36.209 "params": { 00:28:36.209 "name": "Nvme$subsystem", 00:28:36.209 "trtype": "$TEST_TRANSPORT", 00:28:36.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.209 "adrfam": "ipv4", 00:28:36.209 "trsvcid": "$NVMF_PORT", 00:28:36.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.209 "hdgst": ${hdgst:-false}, 00:28:36.209 "ddgst": ${ddgst:-false} 00:28:36.209 }, 00:28:36.209 "method": "bdev_nvme_attach_controller" 00:28:36.209 } 00:28:36.209 EOF 00:28:36.209 )") 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.209 { 00:28:36.209 
"params": { 00:28:36.209 "name": "Nvme$subsystem", 00:28:36.209 "trtype": "$TEST_TRANSPORT", 00:28:36.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.209 "adrfam": "ipv4", 00:28:36.209 "trsvcid": "$NVMF_PORT", 00:28:36.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.209 "hdgst": ${hdgst:-false}, 00:28:36.209 "ddgst": ${ddgst:-false} 00:28:36.209 }, 00:28:36.209 "method": "bdev_nvme_attach_controller" 00:28:36.209 } 00:28:36.209 EOF 00:28:36.209 )") 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.209 { 00:28:36.209 "params": { 00:28:36.209 "name": "Nvme$subsystem", 00:28:36.209 "trtype": "$TEST_TRANSPORT", 00:28:36.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.209 "adrfam": "ipv4", 00:28:36.209 "trsvcid": "$NVMF_PORT", 00:28:36.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.209 "hdgst": ${hdgst:-false}, 00:28:36.209 "ddgst": ${ddgst:-false} 00:28:36.209 }, 00:28:36.209 "method": "bdev_nvme_attach_controller" 00:28:36.209 } 00:28:36.209 EOF 00:28:36.209 )") 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.209 { 00:28:36.209 "params": { 00:28:36.209 "name": "Nvme$subsystem", 00:28:36.209 "trtype": "$TEST_TRANSPORT", 00:28:36.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.209 "adrfam": "ipv4", 00:28:36.209 "trsvcid": "$NVMF_PORT", 00:28:36.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.209 "hdgst": ${hdgst:-false}, 00:28:36.209 "ddgst": ${ddgst:-false} 00:28:36.209 }, 00:28:36.209 "method": "bdev_nvme_attach_controller" 00:28:36.209 } 00:28:36.209 EOF 00:28:36.209 )") 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:36.209 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:36.209 "params": { 00:28:36.209 "name": "Nvme1", 00:28:36.209 "trtype": "tcp", 00:28:36.209 "traddr": "10.0.0.2", 00:28:36.209 "adrfam": "ipv4", 00:28:36.209 "trsvcid": "4420", 00:28:36.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.209 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:36.209 "hdgst": false, 00:28:36.210 "ddgst": false 00:28:36.210 }, 00:28:36.210 "method": "bdev_nvme_attach_controller" 00:28:36.210 },{ 00:28:36.210 "params": { 00:28:36.210 "name": "Nvme2", 00:28:36.210 "trtype": "tcp", 00:28:36.210 "traddr": "10.0.0.2", 00:28:36.210 "adrfam": "ipv4", 00:28:36.210 "trsvcid": "4420", 00:28:36.210 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:36.210 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:36.210 "hdgst": false, 00:28:36.210 "ddgst": false 00:28:36.210 }, 00:28:36.210 "method": "bdev_nvme_attach_controller" 00:28:36.210 },{ 00:28:36.210 "params": { 00:28:36.210 "name": "Nvme3", 00:28:36.210 "trtype": "tcp", 00:28:36.210 "traddr": "10.0.0.2", 00:28:36.210 "adrfam": "ipv4", 00:28:36.210 "trsvcid": "4420", 00:28:36.210 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:36.210 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:36.210 "hdgst": false, 00:28:36.210 "ddgst": false 00:28:36.210 }, 00:28:36.210 "method": "bdev_nvme_attach_controller" 00:28:36.210 },{ 00:28:36.210 "params": { 00:28:36.210 "name": "Nvme4", 00:28:36.210 "trtype": "tcp", 00:28:36.210 "traddr": "10.0.0.2", 00:28:36.210 "adrfam": "ipv4", 00:28:36.210 "trsvcid": "4420", 00:28:36.210 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:36.210 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:36.210 "hdgst": false, 00:28:36.210 "ddgst": false 00:28:36.210 }, 00:28:36.210 "method": "bdev_nvme_attach_controller" 00:28:36.210 },{ 00:28:36.210 "params": { 00:28:36.210 "name": "Nvme5", 00:28:36.210 "trtype": "tcp", 00:28:36.210 "traddr": "10.0.0.2", 00:28:36.210 "adrfam": "ipv4", 00:28:36.210 "trsvcid": "4420", 00:28:36.210 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:36.210 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:36.210 "hdgst": false, 00:28:36.210 "ddgst": false 00:28:36.210 }, 00:28:36.210 "method": "bdev_nvme_attach_controller" 00:28:36.210 },{ 00:28:36.210 "params": { 00:28:36.210 "name": "Nvme6", 00:28:36.210 "trtype": "tcp", 00:28:36.210 "traddr": "10.0.0.2", 00:28:36.210 "adrfam": "ipv4", 00:28:36.210 "trsvcid": "4420", 00:28:36.210 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:36.210 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:36.210 "hdgst": false, 00:28:36.210 "ddgst": false 00:28:36.210 }, 00:28:36.210 "method": "bdev_nvme_attach_controller" 00:28:36.210 },{ 00:28:36.210 "params": { 00:28:36.210 "name": "Nvme7", 00:28:36.210 "trtype": "tcp", 00:28:36.210 "traddr": "10.0.0.2", 00:28:36.210 "adrfam": "ipv4", 00:28:36.210 "trsvcid": "4420", 00:28:36.210 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:36.210 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:36.210 "hdgst": false, 00:28:36.210 "ddgst": false 00:28:36.210 }, 00:28:36.210 "method": "bdev_nvme_attach_controller" 00:28:36.210 },{ 00:28:36.210 "params": { 00:28:36.210 "name": "Nvme8", 00:28:36.210 "trtype": "tcp", 00:28:36.210 "traddr": "10.0.0.2", 00:28:36.210 "adrfam": "ipv4", 00:28:36.210 "trsvcid": "4420", 00:28:36.210 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:36.210 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:36.210 "hdgst": false, 00:28:36.210 "ddgst": false 00:28:36.210 }, 00:28:36.210 "method": "bdev_nvme_attach_controller" 00:28:36.210 },{ 00:28:36.210 "params": { 00:28:36.210 "name": "Nvme9", 00:28:36.210 "trtype": "tcp", 00:28:36.210 "traddr": "10.0.0.2", 00:28:36.210 "adrfam": "ipv4", 00:28:36.210 "trsvcid": "4420", 00:28:36.210 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:36.210 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:36.210 "hdgst": false, 00:28:36.210 "ddgst": false 00:28:36.210 }, 00:28:36.210 "method": "bdev_nvme_attach_controller" 00:28:36.210 },{ 00:28:36.210 "params": { 00:28:36.210 "name": "Nvme10", 00:28:36.210 "trtype": "tcp", 00:28:36.210 "traddr": "10.0.0.2", 00:28:36.210 "adrfam": "ipv4", 00:28:36.210 "trsvcid": "4420", 00:28:36.210 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:36.210 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:36.210 "hdgst": false, 00:28:36.210 "ddgst": false 00:28:36.210 }, 00:28:36.210 "method": "bdev_nvme_attach_controller" 00:28:36.210 }' 00:28:36.210 [2024-11-19 23:53:10.498661] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:28:36.210 [2024-11-19 23:53:10.498751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262915 ] 00:28:36.468 [2024-11-19 23:53:10.575909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.468 [2024-11-19 23:53:10.623583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.849 Running I/O for 1 seconds... 00:28:39.045 1815.00 IOPS, 113.44 MiB/s 00:28:39.045 Latency(us) 00:28:39.045 [2024-11-19T22:53:13.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.045 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.045 Verification LBA range: start 0x0 length 0x400 00:28:39.045 Nvme1n1 : 1.14 224.20 14.01 0.00 0.00 282739.86 23884.23 281173.71 00:28:39.045 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.045 Verification LBA range: start 0x0 length 0x400 00:28:39.045 Nvme2n1 : 1.15 222.00 13.88 0.00 0.00 281073.40 21359.88 262532.36 00:28:39.045 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.045 Verification LBA range: start 0x0 length 0x400 00:28:39.045 Nvme3n1 : 1.12 228.98 14.31 0.00 0.00 267784.91 16505.36 270299.59 00:28:39.045 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.045 Verification LBA range: start 0x0 length 0x400 00:28:39.045 Nvme4n1 : 1.18 271.89 16.99 0.00 0.00 221318.11 23398.78 245444.46 00:28:39.045 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.045 Verification LBA range: start 0x0 length 0x400 00:28:39.045 Nvme5n1 : 1.19 215.65 13.48 0.00 0.00 276252.63 22039.51 290494.39 00:28:39.045 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.045 Verification LBA range: start 0x0 length 0x400 00:28:39.045 Nvme6n1 : 1.13 237.02 14.81 0.00 0.00 243849.47 3616.62 234570.33 00:28:39.045 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.045 Verification LBA range: start 0x0 length 0x400 00:28:39.045 Nvme7n1 : 1.15 222.51 13.91 0.00 0.00 258433.90 18835.53 260978.92 00:28:39.045 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.045 Verification 
LBA range: start 0x0 length 0x400 00:28:39.045 Nvme8n1 : 1.14 228.25 14.27 0.00 0.00 246385.49 3495.25 246997.90 00:28:39.045 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.045 Verification LBA range: start 0x0 length 0x400 00:28:39.045 Nvme9n1 : 1.19 217.94 13.62 0.00 0.00 256329.13 1711.22 290494.39 00:28:39.045 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.045 Verification LBA range: start 0x0 length 0x400 00:28:39.045 Nvme10n1 : 1.20 266.56 16.66 0.00 0.00 206116.18 5315.70 264085.81 00:28:39.045 [2024-11-19T22:53:13.357Z] =================================================================================================================== 00:28:39.045 [2024-11-19T22:53:13.357Z] Total : 2335.01 145.94 0.00 0.00 252072.95 1711.22 290494.39 00:28:39.305 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:39.305 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:39.305 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:39.306 rmmod nvme_tcp 00:28:39.306 rmmod nvme_fabrics 00:28:39.306 rmmod nvme_keyring 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 262417 ']' 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 262417 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 262417 ']' 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 262417 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 262417 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 262417' 00:28:39.306 killing process with pid 262417 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 262417 00:28:39.306 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 262417 00:28:39.873 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:39.873 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:39.873 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:39.873 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:39.873 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:39.873 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:39.873 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:39.873 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:39.873 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:39.873 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.873 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.873 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.778 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:41.778 00:28:41.778 real 0m11.810s 00:28:41.778 user 0m34.150s 00:28:41.778 sys 0m3.288s 00:28:41.778 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.778 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:41.778 ************************************ 00:28:41.778 END TEST nvmf_shutdown_tc1 00:28:41.778 ************************************ 00:28:41.778 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:41.778 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:41.778 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:28:41.778 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:42.038 ************************************ 00:28:42.038 START TEST nvmf_shutdown_tc2 00:28:42.038 ************************************ 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:42.038 23:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.038 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:42.039 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:42.039 23:53:16 
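The e810/x722/mlx arrays above are filled from a pre-built pci_bus_cache map of vendor:device IDs to PCI addresses; that cache is populated before this part of the trace. As a rough approximation of the same lookup, assuming lspci is available and that its machine-readable numeric output has the usual slot/class/vendor/device column layout (an assumption, not taken from nvmf/common.sh):

  # list PCI functions that match one vendor:device pair, e.g. 8086:159b (E810)
  pci_addrs_for() {
    local vendor=$1 device=$2
    # -D: print the PCI domain, -n: numeric IDs, -mm: machine-readable columns
    lspci -Dnmm | grep -i "\"$vendor\" \"$device\"" | awk '{print $1}'
  }

  # PCI addresses never contain spaces, so word splitting into the array is safe
  e810=( $(pci_addrs_for 8086 1592) $(pci_addrs_for 8086 159b) )
  for pci in "${e810[@]}"; do
    echo "Found $pci"
  done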
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:42.039 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:42.039 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:42.039 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:42.039 23:53:16 
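Resolving a PCI function to the network interface on top of it, as the loop above does for 0000:0a:00.0 and 0000:0a:00.1, only needs sysfs: any bound netdev appears under /sys/bus/pci/devices/<address>/net/. A self-contained sketch of that lookup (the cvl_0_* names themselves come from an earlier interface-rename step that is not part of this trace):

  # print the net devices sitting on top of a PCI function, e.g. 0000:0a:00.0
  net_devs_for_pci() {
    local pci=$1 path
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$path" ] || continue      # glob did not match: no netdev bound
      echo "${path##*/}"              # strip the directory part, keep the name
    done
  }

  for pci in 0000:0a:00.0 0000:0a:00.1; do
    echo "Found net devices under $pci: $(net_devs_for_pci "$pci" | tr '\n' ' ')"
  done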
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.039 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:42.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:28:42.298 00:28:42.298 --- 10.0.0.2 ping statistics --- 00:28:42.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.298 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:42.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:28:42.298 00:28:42.298 --- 10.0.0.1 ping statistics --- 00:28:42.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.298 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=263775 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 263775 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 263775 ']' 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
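The two ports used above sit back-to-back in the same host, so nvmf_tcp_init pushes the target port into its own network namespace to force real NVMe/TCP traffic between initiator and target instead of a loopback shortcut. A condensed sketch of that setup, reusing the interface names, addresses, and the comment-tagged iptables rule from the trace (run as root):

  TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"

  # isolate the target port in its own namespace
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"

  # initiator side stays in the default namespace
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # open the NVMe/TCP port; the comment lets cleanup strip exactly this rule later
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # verify both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1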
00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.298 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.298 [2024-11-19 23:53:16.438101] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:28:42.298 [2024-11-19 23:53:16.438175] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.298 [2024-11-19 23:53:16.508170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.298 [2024-11-19 23:53:16.555765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.298 [2024-11-19 23:53:16.555813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.298 [2024-11-19 23:53:16.555836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.298 [2024-11-19 23:53:16.555847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.298 [2024-11-19 23:53:16.555857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.298 [2024-11-19 23:53:16.557329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.298 [2024-11-19 23:53:16.557369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.298 [2024-11-19 23:53:16.557463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.298 [2024-11-19 23:53:16.557460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.557 [2024-11-19 23:53:16.704875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:42.557 23:53:16 
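nvmfappstart above launches nvmf_tgt inside the target namespace, waits for its JSON-RPC socket to answer, and only then creates the TCP transport. A trimmed-down sketch of the same flow, calling scripts/rpc.py directly instead of the rpc_cmd wrapper and passing the -o and -u 8192 transport flags through exactly as the trace does:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk
  RPC_SOCK=/var/tmp/spdk.sock

  # start the target in the namespace: instance 0, tracepoint mask 0xFFFF, core mask 0x1E
  ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!   # pid of the netns exec wrapper; sufficient for kill/wait in this sketch

  # poll the RPC socket until the target answers (bounded at roughly 10 s)
  for _ in $(seq 1 100); do
    "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.1
  done

  # create the TCP transport with the same flags as the trace
  "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" nvmf_create_transport -t tcp -o -u 8192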
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.557 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.558 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.558 Malloc1 
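The loop above appends one block of RPC calls per subsystem to rpcs.txt and replays the whole file in a single rpc_cmd invocation; the generated calls themselves are not echoed in this trace. A representative guess at one such block, based on the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener that appear right afterwards (bdev sizes, serial numbers, and the stdin-batch replay are illustrative assumptions, not lifted from shutdown.sh):

  RPCS=/tmp/rpcs.txt
  rm -f "$RPCS"

  for i in {1..10}; do
    {
      echo "bdev_malloc_create -b Malloc$i 128 512"
      echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
      echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
      echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> "$RPCS"
  done

  # replay the whole batch; recent rpc.py versions execute one call per input line
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock < "$RPCS"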
00:28:42.558 [2024-11-19 23:53:16.806193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.558 Malloc2 00:28:42.817 Malloc3 00:28:42.817 Malloc4 00:28:42.817 Malloc5 00:28:42.817 Malloc6 00:28:42.817 Malloc7 00:28:43.077 Malloc8 00:28:43.077 Malloc9 00:28:43.077 Malloc10 00:28:43.077 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.077 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:43.077 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:43.077 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.077 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=263847 00:28:43.077 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 263847 /var/tmp/bdevperf.sock 00:28:43.077 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 263847 ']' 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:43.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.078 { 00:28:43.078 "params": { 00:28:43.078 "name": "Nvme$subsystem", 00:28:43.078 "trtype": "$TEST_TRANSPORT", 00:28:43.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.078 "adrfam": "ipv4", 00:28:43.078 "trsvcid": "$NVMF_PORT", 00:28:43.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.078 "hdgst": ${hdgst:-false}, 00:28:43.078 "ddgst": ${ddgst:-false} 00:28:43.078 }, 00:28:43.078 "method": "bdev_nvme_attach_controller" 00:28:43.078 } 00:28:43.078 EOF 00:28:43.078 )") 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.078 { 00:28:43.078 "params": { 00:28:43.078 "name": "Nvme$subsystem", 00:28:43.078 "trtype": "$TEST_TRANSPORT", 00:28:43.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.078 "adrfam": "ipv4", 00:28:43.078 "trsvcid": "$NVMF_PORT", 00:28:43.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.078 "hdgst": ${hdgst:-false}, 00:28:43.078 "ddgst": ${ddgst:-false} 00:28:43.078 }, 00:28:43.078 "method": "bdev_nvme_attach_controller" 00:28:43.078 } 00:28:43.078 EOF 00:28:43.078 )") 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.078 { 00:28:43.078 "params": { 00:28:43.078 "name": "Nvme$subsystem", 00:28:43.078 "trtype": "$TEST_TRANSPORT", 00:28:43.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.078 "adrfam": "ipv4", 00:28:43.078 "trsvcid": "$NVMF_PORT", 00:28:43.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.078 "hdgst": ${hdgst:-false}, 00:28:43.078 "ddgst": ${ddgst:-false} 00:28:43.078 }, 00:28:43.078 "method": "bdev_nvme_attach_controller" 00:28:43.078 } 00:28:43.078 EOF 00:28:43.078 )") 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.078 { 00:28:43.078 "params": { 00:28:43.078 "name": "Nvme$subsystem", 00:28:43.078 
"trtype": "$TEST_TRANSPORT", 00:28:43.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.078 "adrfam": "ipv4", 00:28:43.078 "trsvcid": "$NVMF_PORT", 00:28:43.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.078 "hdgst": ${hdgst:-false}, 00:28:43.078 "ddgst": ${ddgst:-false} 00:28:43.078 }, 00:28:43.078 "method": "bdev_nvme_attach_controller" 00:28:43.078 } 00:28:43.078 EOF 00:28:43.078 )") 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.078 { 00:28:43.078 "params": { 00:28:43.078 "name": "Nvme$subsystem", 00:28:43.078 "trtype": "$TEST_TRANSPORT", 00:28:43.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.078 "adrfam": "ipv4", 00:28:43.078 "trsvcid": "$NVMF_PORT", 00:28:43.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.078 "hdgst": ${hdgst:-false}, 00:28:43.078 "ddgst": ${ddgst:-false} 00:28:43.078 }, 00:28:43.078 "method": "bdev_nvme_attach_controller" 00:28:43.078 } 00:28:43.078 EOF 00:28:43.078 )") 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.078 { 00:28:43.078 "params": { 00:28:43.078 "name": "Nvme$subsystem", 00:28:43.078 "trtype": "$TEST_TRANSPORT", 00:28:43.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.078 "adrfam": "ipv4", 00:28:43.078 "trsvcid": "$NVMF_PORT", 00:28:43.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.078 "hdgst": ${hdgst:-false}, 00:28:43.078 "ddgst": ${ddgst:-false} 00:28:43.078 }, 00:28:43.078 "method": "bdev_nvme_attach_controller" 00:28:43.078 } 00:28:43.078 EOF 00:28:43.078 )") 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.078 { 00:28:43.078 "params": { 00:28:43.078 "name": "Nvme$subsystem", 00:28:43.078 "trtype": "$TEST_TRANSPORT", 00:28:43.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.078 "adrfam": "ipv4", 00:28:43.078 "trsvcid": "$NVMF_PORT", 00:28:43.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.078 "hdgst": ${hdgst:-false}, 00:28:43.078 "ddgst": ${ddgst:-false} 00:28:43.078 }, 00:28:43.078 "method": "bdev_nvme_attach_controller" 00:28:43.078 } 00:28:43.078 EOF 00:28:43.078 )") 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.078 23:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.078 { 00:28:43.078 "params": { 00:28:43.078 "name": "Nvme$subsystem", 00:28:43.078 "trtype": "$TEST_TRANSPORT", 00:28:43.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.078 "adrfam": "ipv4", 00:28:43.078 "trsvcid": "$NVMF_PORT", 00:28:43.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.078 "hdgst": ${hdgst:-false}, 00:28:43.078 "ddgst": ${ddgst:-false} 00:28:43.078 }, 00:28:43.078 "method": "bdev_nvme_attach_controller" 00:28:43.078 } 00:28:43.078 EOF 00:28:43.078 )") 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.078 { 00:28:43.078 "params": { 00:28:43.078 "name": "Nvme$subsystem", 00:28:43.078 "trtype": "$TEST_TRANSPORT", 00:28:43.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.078 "adrfam": "ipv4", 00:28:43.078 "trsvcid": "$NVMF_PORT", 00:28:43.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.078 "hdgst": ${hdgst:-false}, 00:28:43.078 "ddgst": ${ddgst:-false} 00:28:43.078 }, 00:28:43.078 "method": "bdev_nvme_attach_controller" 00:28:43.078 } 00:28:43.078 EOF 00:28:43.078 )") 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.078 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.078 { 00:28:43.078 "params": { 00:28:43.078 "name": "Nvme$subsystem", 00:28:43.078 "trtype": "$TEST_TRANSPORT", 00:28:43.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.079 "adrfam": "ipv4", 00:28:43.079 "trsvcid": "$NVMF_PORT", 00:28:43.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.079 "hdgst": ${hdgst:-false}, 00:28:43.079 "ddgst": ${ddgst:-false} 00:28:43.079 }, 00:28:43.079 "method": "bdev_nvme_attach_controller" 00:28:43.079 } 00:28:43.079 EOF 00:28:43.079 )") 00:28:43.079 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.079 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
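The jq . just above and the IFS=, / printf steps that follow are the part of gen_nvmf_target_json that glues the per-controller fragments built in the loop into one JSON document and lets jq validate and pretty-print it. A stripped-down illustration of that shell idiom (the "controllers" wrapper below is invented for the example; the real function embeds the fragments in its own larger configuration document):

  # join pre-rendered JSON objects with commas, then have jq check and pretty-print them
  fragments=('{"name": "Nvme1"}' '{"name": "Nvme2"}' '{"name": "Nvme3"}')
  joined=$(IFS=,; printf '%s' "${fragments[*]}")   # "${arr[*]}" joins on the first IFS character
  printf '{"controllers": [%s]}\n' "$joined" | jq .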
00:28:43.079 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:43.079 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:43.079 "params": { 00:28:43.079 "name": "Nvme1", 00:28:43.079 "trtype": "tcp", 00:28:43.079 "traddr": "10.0.0.2", 00:28:43.079 "adrfam": "ipv4", 00:28:43.079 "trsvcid": "4420", 00:28:43.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:43.079 "hdgst": false, 00:28:43.079 "ddgst": false 00:28:43.079 }, 00:28:43.079 "method": "bdev_nvme_attach_controller" 00:28:43.079 },{ 00:28:43.079 "params": { 00:28:43.079 "name": "Nvme2", 00:28:43.079 "trtype": "tcp", 00:28:43.079 "traddr": "10.0.0.2", 00:28:43.079 "adrfam": "ipv4", 00:28:43.079 "trsvcid": "4420", 00:28:43.079 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:43.079 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:43.079 "hdgst": false, 00:28:43.079 "ddgst": false 00:28:43.079 }, 00:28:43.079 "method": "bdev_nvme_attach_controller" 00:28:43.079 },{ 00:28:43.079 "params": { 00:28:43.079 "name": "Nvme3", 00:28:43.079 "trtype": "tcp", 00:28:43.079 "traddr": "10.0.0.2", 00:28:43.079 "adrfam": "ipv4", 00:28:43.079 "trsvcid": "4420", 00:28:43.079 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:43.079 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:43.079 "hdgst": false, 00:28:43.079 "ddgst": false 00:28:43.079 }, 00:28:43.079 "method": "bdev_nvme_attach_controller" 00:28:43.079 },{ 00:28:43.079 "params": { 00:28:43.079 "name": "Nvme4", 00:28:43.079 "trtype": "tcp", 00:28:43.079 "traddr": "10.0.0.2", 00:28:43.079 "adrfam": "ipv4", 00:28:43.079 "trsvcid": "4420", 00:28:43.079 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:43.079 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:43.079 "hdgst": false, 00:28:43.079 "ddgst": false 00:28:43.079 }, 00:28:43.079 "method": "bdev_nvme_attach_controller" 00:28:43.079 },{ 00:28:43.079 "params": { 00:28:43.079 "name": "Nvme5", 00:28:43.079 "trtype": "tcp", 00:28:43.079 "traddr": "10.0.0.2", 00:28:43.079 "adrfam": "ipv4", 00:28:43.079 "trsvcid": "4420", 00:28:43.079 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:43.079 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:43.079 "hdgst": false, 00:28:43.079 "ddgst": false 00:28:43.079 }, 00:28:43.079 "method": "bdev_nvme_attach_controller" 00:28:43.079 },{ 00:28:43.079 "params": { 00:28:43.079 "name": "Nvme6", 00:28:43.079 "trtype": "tcp", 00:28:43.079 "traddr": "10.0.0.2", 00:28:43.079 "adrfam": "ipv4", 00:28:43.079 "trsvcid": "4420", 00:28:43.079 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:43.079 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:43.079 "hdgst": false, 00:28:43.079 "ddgst": false 00:28:43.079 }, 00:28:43.079 "method": "bdev_nvme_attach_controller" 00:28:43.079 },{ 00:28:43.079 "params": { 00:28:43.079 "name": "Nvme7", 00:28:43.079 "trtype": "tcp", 00:28:43.079 "traddr": "10.0.0.2", 00:28:43.079 "adrfam": "ipv4", 00:28:43.079 "trsvcid": "4420", 00:28:43.079 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:43.079 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:43.079 "hdgst": false, 00:28:43.079 "ddgst": false 00:28:43.079 }, 00:28:43.079 "method": "bdev_nvme_attach_controller" 00:28:43.079 },{ 00:28:43.079 "params": { 00:28:43.079 "name": "Nvme8", 00:28:43.079 "trtype": "tcp", 00:28:43.079 "traddr": "10.0.0.2", 00:28:43.079 "adrfam": "ipv4", 00:28:43.079 "trsvcid": "4420", 00:28:43.079 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:43.079 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:43.079 "hdgst": false, 00:28:43.079 "ddgst": false 00:28:43.079 }, 00:28:43.079 "method": "bdev_nvme_attach_controller" 00:28:43.079 },{ 00:28:43.079 "params": { 00:28:43.079 "name": "Nvme9", 00:28:43.079 "trtype": "tcp", 00:28:43.079 "traddr": "10.0.0.2", 00:28:43.079 "adrfam": "ipv4", 00:28:43.079 "trsvcid": "4420", 00:28:43.079 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:43.079 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:43.079 "hdgst": false, 00:28:43.079 "ddgst": false 00:28:43.079 }, 00:28:43.079 "method": "bdev_nvme_attach_controller" 00:28:43.079 },{ 00:28:43.079 "params": { 00:28:43.079 "name": "Nvme10", 00:28:43.079 "trtype": "tcp", 00:28:43.079 "traddr": "10.0.0.2", 00:28:43.079 "adrfam": "ipv4", 00:28:43.079 "trsvcid": "4420", 00:28:43.079 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:43.079 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:43.079 "hdgst": false, 00:28:43.079 "ddgst": false 00:28:43.079 }, 00:28:43.079 "method": "bdev_nvme_attach_controller" 00:28:43.079 }' 00:28:43.079 [2024-11-19 23:53:17.334813] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:28:43.079 [2024-11-19 23:53:17.334907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263847 ] 00:28:43.338 [2024-11-19 23:53:17.410504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.338 [2024-11-19 23:53:17.457900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.242 Running I/O for 10 seconds... 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:45.242 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:45.500 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:45.500 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:45.500 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:45.500 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:45.500 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.500 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.500 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.500 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:45.500 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:45.500 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=146 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 146 -ge 100 ']' 00:28:45.759 23:53:19 
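waitforio above declares the workload healthy once Nvme1n1 has completed at least 100 reads, polling up to ten times with a short sleep between samples (3, then 67, then 146 reads in this run). A standalone version of that loop, querying the bdevperf RPC socket with rpc.py and jq instead of the rpc_cmd wrapper:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock

  waitforio() {
    local bdev=$1 i reads ret=1
    for ((i = 10; i > 0; i--)); do
      reads=$("$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
      if [ "$reads" -ge 100 ]; then
        ret=0          # enough I/O observed, the data path works
        break
      fi
      sleep 0.25       # give bdevperf a little more time
    done
    return $ret
  }

  waitforio Nvme1n1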
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 263847 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 263847 ']' 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 263847 00:28:45.759 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:45.759 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.759 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263847 00:28:45.759 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:45.759 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:45.759 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263847' 00:28:45.759 killing process with pid 263847 00:28:45.759 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 263847 00:28:45.759 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 263847 00:28:46.018 Received shutdown signal, test time was about 0.976253 seconds 00:28:46.018 00:28:46.018 Latency(us) 00:28:46.018 [2024-11-19T22:53:20.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.018 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.018 Verification LBA range: start 0x0 length 0x400 00:28:46.018 Nvme1n1 : 0.93 276.00 17.25 0.00 0.00 228273.49 31651.46 236123.78 00:28:46.018 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.018 Verification LBA range: start 0x0 length 0x400 00:28:46.018 Nvme2n1 : 0.98 262.46 16.40 0.00 0.00 227227.88 17573.36 253211.69 00:28:46.018 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.018 Verification LBA range: start 0x0 length 0x400 00:28:46.018 Nvme3n1 : 0.94 273.53 17.10 0.00 0.00 222022.16 18447.17 253211.69 00:28:46.018 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.018 Verification LBA range: start 0x0 length 0x400 00:28:46.018 Nvme4n1 : 0.91 216.34 13.52 0.00 0.00 273324.62 3422.44 239230.67 00:28:46.018 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.018 Verification LBA range: start 0x0 length 0x400 00:28:46.018 Nvme5n1 : 0.92 214.58 13.41 0.00 0.00 269945.33 2936.98 262532.36 00:28:46.018 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.018 Verification LBA range: start 0x0 length 0x400 00:28:46.018 Nvme6n1 : 0.91 211.02 13.19 0.00 0.00 269320.34 20583.16 260978.92 00:28:46.018 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 
64, IO size: 65536) 00:28:46.018 Verification LBA range: start 0x0 length 0x400 00:28:46.018 Nvme7n1 : 0.90 214.21 13.39 0.00 0.00 258549.13 19806.44 243891.01 00:28:46.018 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.018 Verification LBA range: start 0x0 length 0x400 00:28:46.018 Nvme8n1 : 0.89 220.45 13.78 0.00 0.00 243071.47 2524.35 254765.13 00:28:46.018 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.018 Verification LBA range: start 0x0 length 0x400 00:28:46.018 Nvme9n1 : 0.93 206.38 12.90 0.00 0.00 257936.24 21262.79 287387.50 00:28:46.018 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.018 Verification LBA range: start 0x0 length 0x400 00:28:46.018 Nvme10n1 : 0.92 208.22 13.01 0.00 0.00 249425.16 19029.71 260978.92 00:28:46.018 [2024-11-19T22:53:20.330Z] =================================================================================================================== 00:28:46.018 [2024-11-19T22:53:20.330Z] Total : 2303.18 143.95 0.00 0.00 247811.05 2524.35 287387.50 00:28:46.278 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:47.216 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 263775 00:28:47.216 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:47.216 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:47.216 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:47.217 rmmod nvme_tcp 00:28:47.217 rmmod nvme_fabrics 00:28:47.217 rmmod nvme_keyring 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 263775 ']' 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- 
# killprocess 263775 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 263775 ']' 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 263775 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263775 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263775' 00:28:47.217 killing process with pid 263775 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 263775 00:28:47.217 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 263775 00:28:47.784 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:47.784 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:47.784 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:47.784 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:47.784 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:47.784 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:47.784 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:47.784 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:47.784 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:47.784 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.784 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.784 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.695 00:28:49.695 real 0m7.825s 00:28:49.695 user 0m23.841s 00:28:49.695 sys 0m1.509s 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:49.695 
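The teardown traced above mirrors the earlier setup step by step: stop the target, unload the kernel NVMe/TCP initiator modules, strip only the iptables rules that carry the SPDK_NVMF comment, remove the namespace, and flush the initiator address. A condensed sketch of that sequence using the same names (run as root; assumes $nvmfpid is the target pid saved at start-up):

  NS=cvl_0_0_ns_spdk
  INITIATOR_IF=cvl_0_1

  # stop the nvmf target and reap it
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null

  # unload the kernel initiator side
  modprobe -v -r nvme-tcp nvme-fabrics

  # restore the firewall minus the rules the test tagged with SPDK_NVMF
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # delete the target namespace (its physical port returns to the default namespace)
  ip netns delete "$NS"
  ip -4 addr flush "$INITIATOR_IF"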
************************************ 00:28:49.695 END TEST nvmf_shutdown_tc2 00:28:49.695 ************************************ 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:49.695 ************************************ 00:28:49.695 START TEST nvmf_shutdown_tc3 00:28:49.695 ************************************ 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:49.695 23:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:49.695 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:49.696 23:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:49.696 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:49.696 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:49.696 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:28:49.696 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:49.696 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:49.696 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:49.958 23:53:24 
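
The device selection above keys purely on PCI vendor:device IDs; with the IDs and bus addresses from this trace, the two E810 ports it picked and their net devices can be cross-checked by hand:

  # List the Intel E810 functions this run matched (vendor 0x8086, device 0x159b)
  lspci -d 8086:159b
  # Show the net devices hanging off each function (cvl_0_0 and cvl_0_1 here)
  ls /sys/bus/pci/devices/0000:0a:00.0/net /sys/bus/pci/devices/0000:0a:00.1/net
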
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:49.958 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:49.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:49.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:28:49.959 00:28:49.959 --- 10.0.0.2 ping statistics --- 00:28:49.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.959 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:49.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:49.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:28:49.959 00:28:49.959 --- 10.0.0.1 ping statistics --- 00:28:49.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.959 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=264793 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 264793 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 264793 ']' 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
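
Stripped of the trace prefixes, the nvmf_tcp_init plumbing above builds roughly this topology before the target is started inside the namespace (names and addresses are the ones assigned in this run; the nvmf_tgt path is shortened):

  ip netns add cvl_0_0_ns_spdk                                        # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic and tag the rule so teardown can grep it back out later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # Sanity-check both directions, then start the target pinned to cores 1-4 (-m 0x1E)
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
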
00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.959 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.959 [2024-11-19 23:53:24.205990] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:28:49.959 [2024-11-19 23:53:24.206112] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.219 [2024-11-19 23:53:24.287924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.219 [2024-11-19 23:53:24.337652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.219 [2024-11-19 23:53:24.337705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.219 [2024-11-19 23:53:24.337718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.219 [2024-11-19 23:53:24.337729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.219 [2024-11-19 23:53:24.337739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.219 [2024-11-19 23:53:24.339329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.219 [2024-11-19 23:53:24.339414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:50.219 [2024-11-19 23:53:24.339392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.219 [2024-11-19 23:53:24.339417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.219 [2024-11-19 23:53:24.492894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:50.219 23:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.219 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:50.477 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.478 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:50.478 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.478 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:50.478 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:50.478 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.478 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.478 Malloc1 
00:28:50.478 [2024-11-19 23:53:24.594011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.478 Malloc2 00:28:50.478 Malloc3 00:28:50.478 Malloc4 00:28:50.478 Malloc5 00:28:50.737 Malloc6 00:28:50.737 Malloc7 00:28:50.737 Malloc8 00:28:50.737 Malloc9 00:28:50.737 Malloc10 00:28:50.737 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.737 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:50.737 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.737 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=264933 00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 264933 /var/tmp/bdevperf.sock 00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 264933 ']' 00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:50.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
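
The per-subsystem RPC batches written to rpcs.txt are not echoed in this trace; based on the Malloc1-Malloc10 bdevs and the 10.0.0.2:4420 listener that do show up, one batch plausibly looks like the lines below (the method names are real rpc.py commands, but the bdev size and serial number are illustrative guesses, not values from this log):

  # Hypothetical contents of one rpcs.txt entry (subsystem 1), fed to scripts/rpc.py in one shot
  bdev_malloc_create -b Malloc1 64 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
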
00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.996 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.997 { 00:28:50.997 "params": { 00:28:50.997 "name": "Nvme$subsystem", 00:28:50.997 "trtype": "$TEST_TRANSPORT", 00:28:50.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.997 "adrfam": "ipv4", 00:28:50.997 "trsvcid": "$NVMF_PORT", 00:28:50.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.997 "hdgst": ${hdgst:-false}, 00:28:50.997 "ddgst": ${ddgst:-false} 00:28:50.997 }, 00:28:50.997 "method": "bdev_nvme_attach_controller" 00:28:50.997 } 00:28:50.997 EOF 00:28:50.997 )") 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.997 { 00:28:50.997 "params": { 00:28:50.997 "name": "Nvme$subsystem", 00:28:50.997 "trtype": "$TEST_TRANSPORT", 00:28:50.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.997 "adrfam": "ipv4", 00:28:50.997 "trsvcid": "$NVMF_PORT", 00:28:50.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.997 "hdgst": ${hdgst:-false}, 00:28:50.997 "ddgst": ${ddgst:-false} 00:28:50.997 }, 00:28:50.997 "method": "bdev_nvme_attach_controller" 00:28:50.997 } 00:28:50.997 EOF 00:28:50.997 )") 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.997 { 00:28:50.997 "params": { 00:28:50.997 "name": "Nvme$subsystem", 00:28:50.997 "trtype": "$TEST_TRANSPORT", 00:28:50.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.997 "adrfam": "ipv4", 00:28:50.997 "trsvcid": "$NVMF_PORT", 00:28:50.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.997 "hdgst": ${hdgst:-false}, 00:28:50.997 "ddgst": ${ddgst:-false} 00:28:50.997 }, 00:28:50.997 "method": "bdev_nvme_attach_controller" 00:28:50.997 } 00:28:50.997 EOF 00:28:50.997 )") 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.997 { 00:28:50.997 "params": { 00:28:50.997 "name": "Nvme$subsystem", 00:28:50.997 
"trtype": "$TEST_TRANSPORT", 00:28:50.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.997 "adrfam": "ipv4", 00:28:50.997 "trsvcid": "$NVMF_PORT", 00:28:50.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.997 "hdgst": ${hdgst:-false}, 00:28:50.997 "ddgst": ${ddgst:-false} 00:28:50.997 }, 00:28:50.997 "method": "bdev_nvme_attach_controller" 00:28:50.997 } 00:28:50.997 EOF 00:28:50.997 )") 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.997 { 00:28:50.997 "params": { 00:28:50.997 "name": "Nvme$subsystem", 00:28:50.997 "trtype": "$TEST_TRANSPORT", 00:28:50.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.997 "adrfam": "ipv4", 00:28:50.997 "trsvcid": "$NVMF_PORT", 00:28:50.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.997 "hdgst": ${hdgst:-false}, 00:28:50.997 "ddgst": ${ddgst:-false} 00:28:50.997 }, 00:28:50.997 "method": "bdev_nvme_attach_controller" 00:28:50.997 } 00:28:50.997 EOF 00:28:50.997 )") 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.997 { 00:28:50.997 "params": { 00:28:50.997 "name": "Nvme$subsystem", 00:28:50.997 "trtype": "$TEST_TRANSPORT", 00:28:50.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.997 "adrfam": "ipv4", 00:28:50.997 "trsvcid": "$NVMF_PORT", 00:28:50.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.997 "hdgst": ${hdgst:-false}, 00:28:50.997 "ddgst": ${ddgst:-false} 00:28:50.997 }, 00:28:50.997 "method": "bdev_nvme_attach_controller" 00:28:50.997 } 00:28:50.997 EOF 00:28:50.997 )") 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.997 { 00:28:50.997 "params": { 00:28:50.997 "name": "Nvme$subsystem", 00:28:50.997 "trtype": "$TEST_TRANSPORT", 00:28:50.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.997 "adrfam": "ipv4", 00:28:50.997 "trsvcid": "$NVMF_PORT", 00:28:50.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.997 "hdgst": ${hdgst:-false}, 00:28:50.997 "ddgst": ${ddgst:-false} 00:28:50.997 }, 00:28:50.997 "method": "bdev_nvme_attach_controller" 00:28:50.997 } 00:28:50.997 EOF 00:28:50.997 )") 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.997 23:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.997 { 00:28:50.997 "params": { 00:28:50.997 "name": "Nvme$subsystem", 00:28:50.997 "trtype": "$TEST_TRANSPORT", 00:28:50.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.997 "adrfam": "ipv4", 00:28:50.997 "trsvcid": "$NVMF_PORT", 00:28:50.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.997 "hdgst": ${hdgst:-false}, 00:28:50.997 "ddgst": ${ddgst:-false} 00:28:50.997 }, 00:28:50.997 "method": "bdev_nvme_attach_controller" 00:28:50.997 } 00:28:50.997 EOF 00:28:50.997 )") 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.997 { 00:28:50.997 "params": { 00:28:50.997 "name": "Nvme$subsystem", 00:28:50.997 "trtype": "$TEST_TRANSPORT", 00:28:50.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.997 "adrfam": "ipv4", 00:28:50.997 "trsvcid": "$NVMF_PORT", 00:28:50.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.997 "hdgst": ${hdgst:-false}, 00:28:50.997 "ddgst": ${ddgst:-false} 00:28:50.997 }, 00:28:50.997 "method": "bdev_nvme_attach_controller" 00:28:50.997 } 00:28:50.997 EOF 00:28:50.997 )") 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.997 { 00:28:50.997 "params": { 00:28:50.997 "name": "Nvme$subsystem", 00:28:50.997 "trtype": "$TEST_TRANSPORT", 00:28:50.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.997 "adrfam": "ipv4", 00:28:50.997 "trsvcid": "$NVMF_PORT", 00:28:50.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.997 "hdgst": ${hdgst:-false}, 00:28:50.997 "ddgst": ${ddgst:-false} 00:28:50.997 }, 00:28:50.997 "method": "bdev_nvme_attach_controller" 00:28:50.997 } 00:28:50.997 EOF 00:28:50.997 )") 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
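
The config generation traced here is plain bash: one heredoc fragment per subsystem collected into an array, joined with commas through IFS, and handed to jq. A minimal standalone version of the same idea (two controllers instead of ten, and without the full SPDK config wrapper that gen_nvmf_target_json adds around the entries):

  config=()
  for i in 1 2; do
      config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
      )")
  done
  # Join the fragments with commas and pretty-print the resulting array
  joined=$(IFS=,; printf '%s' "${config[*]}")
  jq . <<< "[$joined]"
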
00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:50.997 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:50.997 "params": { 00:28:50.997 "name": "Nvme1", 00:28:50.997 "trtype": "tcp", 00:28:50.997 "traddr": "10.0.0.2", 00:28:50.997 "adrfam": "ipv4", 00:28:50.997 "trsvcid": "4420", 00:28:50.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:50.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:50.997 "hdgst": false, 00:28:50.997 "ddgst": false 00:28:50.997 }, 00:28:50.997 "method": "bdev_nvme_attach_controller" 00:28:50.997 },{ 00:28:50.997 "params": { 00:28:50.998 "name": "Nvme2", 00:28:50.998 "trtype": "tcp", 00:28:50.998 "traddr": "10.0.0.2", 00:28:50.998 "adrfam": "ipv4", 00:28:50.998 "trsvcid": "4420", 00:28:50.998 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:50.998 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:50.998 "hdgst": false, 00:28:50.998 "ddgst": false 00:28:50.998 }, 00:28:50.998 "method": "bdev_nvme_attach_controller" 00:28:50.998 },{ 00:28:50.998 "params": { 00:28:50.998 "name": "Nvme3", 00:28:50.998 "trtype": "tcp", 00:28:50.998 "traddr": "10.0.0.2", 00:28:50.998 "adrfam": "ipv4", 00:28:50.998 "trsvcid": "4420", 00:28:50.998 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:50.998 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:50.998 "hdgst": false, 00:28:50.998 "ddgst": false 00:28:50.998 }, 00:28:50.998 "method": "bdev_nvme_attach_controller" 00:28:50.998 },{ 00:28:50.998 "params": { 00:28:50.998 "name": "Nvme4", 00:28:50.998 "trtype": "tcp", 00:28:50.998 "traddr": "10.0.0.2", 00:28:50.998 "adrfam": "ipv4", 00:28:50.998 "trsvcid": "4420", 00:28:50.998 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:50.998 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:50.998 "hdgst": false, 00:28:50.998 "ddgst": false 00:28:50.998 }, 00:28:50.998 "method": "bdev_nvme_attach_controller" 00:28:50.998 },{ 00:28:50.998 "params": { 00:28:50.998 "name": "Nvme5", 00:28:50.998 "trtype": "tcp", 00:28:50.998 "traddr": "10.0.0.2", 00:28:50.998 "adrfam": "ipv4", 00:28:50.998 "trsvcid": "4420", 00:28:50.998 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:50.998 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:50.998 "hdgst": false, 00:28:50.998 "ddgst": false 00:28:50.998 }, 00:28:50.998 "method": "bdev_nvme_attach_controller" 00:28:50.998 },{ 00:28:50.998 "params": { 00:28:50.998 "name": "Nvme6", 00:28:50.998 "trtype": "tcp", 00:28:50.998 "traddr": "10.0.0.2", 00:28:50.998 "adrfam": "ipv4", 00:28:50.998 "trsvcid": "4420", 00:28:50.998 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:50.998 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:50.998 "hdgst": false, 00:28:50.998 "ddgst": false 00:28:50.998 }, 00:28:50.998 "method": "bdev_nvme_attach_controller" 00:28:50.998 },{ 00:28:50.998 "params": { 00:28:50.998 "name": "Nvme7", 00:28:50.998 "trtype": "tcp", 00:28:50.998 "traddr": "10.0.0.2", 00:28:50.998 "adrfam": "ipv4", 00:28:50.998 "trsvcid": "4420", 00:28:50.998 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:50.998 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:50.998 "hdgst": false, 00:28:50.998 "ddgst": false 00:28:50.998 }, 00:28:50.998 "method": "bdev_nvme_attach_controller" 00:28:50.998 },{ 00:28:50.998 "params": { 00:28:50.998 "name": "Nvme8", 00:28:50.998 "trtype": "tcp", 00:28:50.998 "traddr": "10.0.0.2", 00:28:50.998 "adrfam": "ipv4", 00:28:50.998 "trsvcid": "4420", 00:28:50.998 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:50.998 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:50.998 "hdgst": false, 00:28:50.998 "ddgst": false 00:28:50.998 }, 00:28:50.998 "method": "bdev_nvme_attach_controller" 00:28:50.998 },{ 00:28:50.998 "params": { 00:28:50.998 "name": "Nvme9", 00:28:50.998 "trtype": "tcp", 00:28:50.998 "traddr": "10.0.0.2", 00:28:50.998 "adrfam": "ipv4", 00:28:50.998 "trsvcid": "4420", 00:28:50.998 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:50.998 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:50.998 "hdgst": false, 00:28:50.998 "ddgst": false 00:28:50.998 }, 00:28:50.998 "method": "bdev_nvme_attach_controller" 00:28:50.998 },{ 00:28:50.998 "params": { 00:28:50.998 "name": "Nvme10", 00:28:50.998 "trtype": "tcp", 00:28:50.998 "traddr": "10.0.0.2", 00:28:50.998 "adrfam": "ipv4", 00:28:50.998 "trsvcid": "4420", 00:28:50.998 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:50.998 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:50.998 "hdgst": false, 00:28:50.998 "ddgst": false 00:28:50.998 }, 00:28:50.998 "method": "bdev_nvme_attach_controller" 00:28:50.998 }' 00:28:50.998 [2024-11-19 23:53:25.120154] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:28:50.998 [2024-11-19 23:53:25.120232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264933 ] 00:28:50.998 [2024-11-19 23:53:25.191226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.998 [2024-11-19 23:53:25.238307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.902 Running I/O for 10 seconds... 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:52.902 23:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.902 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.160 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.160 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:53.160 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:53.160 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 264793 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 264793 ']' 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 264793 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers 
-o comm= 264793 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 264793' 00:28:53.433 killing process with pid 264793 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 264793 00:28:53.433 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 264793 00:28:53.433 [2024-11-19 23:53:27.565189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 
[2024-11-19 23:53:27.565614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.433 [2024-11-19 23:53:27.565819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.565848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.565862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.565874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.565887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.565929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.565951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the 
state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.565999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.566434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0c00 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.568988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569027] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 
00:28:53.434 [2024-11-19 23:53:27.569353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.434 [2024-11-19 23:53:27.569513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.569525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.569538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.569551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.569572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.569586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.569598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.569610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.569623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.569635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3790 is 
same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.572870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10d0 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.572896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10d0 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.572914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10d0 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.572927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10d0 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.572940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10d0 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.572952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf10d0 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.577842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.577875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.577895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.577908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.577920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.577933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.577946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.577958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.577970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.577982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.577994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578366] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 
00:28:53.435 [2024-11-19 23:53:27.578658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.578695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1f60 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.579824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.579853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.579867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.579879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.579891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.435 [2024-11-19 23:53:27.579905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.579917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.579935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.579948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.579961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.579974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.579993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is 
same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580663] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.580686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2430 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.581995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 
00:28:53.436 [2024-11-19 23:53:27.582290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.436 [2024-11-19 23:53:27.582401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is 
same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.582859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2900 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.583939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.583971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.583987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584261] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.437 [2024-11-19 23:53:27.584422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 
00:28:53.438 [2024-11-19 23:53:27.584551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.584763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2df0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.585533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf32c0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.585558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf32c0 is same with the state(6) to be set 00:28:53.438 [2024-11-19 23:53:27.585571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf32c0 is 
same with the state(6) to be set
00:28:53.438 [2024-11-19 23:53:27.585583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf32c0 is same with the state(6) to be set
[... the same tcp.c:1773 recv-state message for tqpair=0xaf32c0 repeats at every timestamp from 23:53:27.585594 through 23:53:27.586255 ...]
00:28:53.439 [2024-11-19 23:53:27.586855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.439 [2024-11-19 23:53:27.586898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_qpair.c: 243 WRITE command / nvme_qpair.c: 474 ABORTED - SQ DELETION (00/08) pair repeats for cid:35-63 (lba:29056-32640, len:128) and then for READ commands cid:0-33 (lba:24576-28800, len:128) ...]
00:28:53.440 [2024-11-19 23:53:27.588936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:53.440 [2024-11-19 23:53:27.589922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:53.440 [2024-11-19 23:53:27.589949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) pair repeats for qid:0 cid:1-3, and the same four-command group recurs for each remaining admin qpair, each group followed by nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(6) to be set, for tqpair=0xa31450, 0xa2b5b0, 0xe6b950, 0xe9fad0, 0xa29cc0, 0xe6ca50, 0xe8c710, 0xa2f700, 0xa390b0 and 0x979f50 ...]
00:28:53.441 [2024-11-19 23:53:27.591774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.441 [2024-11-19 23:53:27.591797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE command / ABORTED - SQ DELETION (00/08) pair repeats for cid:1-63 (lba:24704-32640, len:128); a second dump then repeats WRITE cid:62-63 and continues with READ commands cid:0-10 (lba:24576-25856, len:128), each likewise reported as ABORTED - SQ DELETION ...]
00:28:53.443 [2024-11-19
23:53:27.595514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.443 [2024-11-19 23:53:27.595528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.443 [2024-11-19 23:53:27.595543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.443 [2024-11-19 23:53:27.595557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.443 [2024-11-19 23:53:27.595572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.443 [2024-11-19 23:53:27.595585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.443 [2024-11-19 23:53:27.595601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.595638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.595667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.595710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.595739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.595769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.595799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.595828] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.595858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.595887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.595917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.595946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.595975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.595989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.444 [2024-11-19 23:53:27.596809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.444 [2024-11-19 23:53:27.596823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.596839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.596853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.596873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.596887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.596914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.596928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.596944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.596958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.596985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.596999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597895] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.597984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.597997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.445 [2024-11-19 23:53:27.598518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.445 [2024-11-19 23:53:27.598536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.598983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.598997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:53.446 [2024-11-19 23:53:27.599166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 
23:53:27.599476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.446 [2024-11-19 23:53:27.599593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.446 [2024-11-19 23:53:27.599812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:53.446 [2024-11-19 23:53:27.599861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2b5b0 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.603756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:53.447 [2024-11-19 23:53:27.603810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa390b0 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.603856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa31450 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.603891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6b950 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.603923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9fad0 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.603963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa29cc0 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.603994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6ca50 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.604025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8c710 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.604051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2f700 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.604095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979f50 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.605526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:53.447 [2024-11-19 23:53:27.605734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.447 [2024-11-19 23:53:27.605767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2b5b0 with addr=10.0.0.2, port=4420 00:28:53.447 [2024-11-19 23:53:27.605784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b5b0 is same with the state(6) to be set 00:28:53.447 [2024-11-19 23:53:27.605879] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:53.447 [2024-11-19 23:53:27.606513] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:53.447 [2024-11-19 23:53:27.606589] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:53.447 [2024-11-19 23:53:27.606659] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:53.447 [2024-11-19 23:53:27.606897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.447 [2024-11-19 23:53:27.606924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.447 [2024-11-19 23:53:27.606950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.447 [2024-11-19 23:53:27.606966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.447 [2024-11-19 23:53:27.606983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.447 [2024-11-19 23:53:27.606997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.447 [2024-11-19 23:53:27.607013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.447 [2024-11-19 23:53:27.607028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.447 [2024-11-19 23:53:27.607044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.447 [2024-11-19 23:53:27.607058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.447 [2024-11-19 23:53:27.607093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.447 [2024-11-19 23:53:27.607110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.447 [2024-11-19 23:53:27.607126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.447 [2024-11-19 23:53:27.607140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.447 [2024-11-19 23:53:27.607156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.447 [2024-11-19 23:53:27.607175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.447 [2024-11-19 23:53:27.607192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.447 [2024-11-19 23:53:27.607207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.447 [2024-11-19 23:53:27.607222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.447 [2024-11-19 23:53:27.607236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.447 [2024-11-19 23:53:27.607251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.447 [2024-11-19 23:53:27.607266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.447 [2024-11-19 23:53:27.607281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.447 [2024-11-19 23:53:27.607295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.447 [2024-11-19 23:53:27.607310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ed60 is same with the state(6) to be set 00:28:53.447 [2024-11-19 23:53:27.607733] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:53.447 [2024-11-19 23:53:27.607776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:53.447 [2024-11-19 23:53:27.607905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.447 [2024-11-19 23:53:27.607935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa390b0 with addr=10.0.0.2, port=4420 00:28:53.447 [2024-11-19 23:53:27.607952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa390b0 is same with the state(6) to be set 00:28:53.447 [2024-11-19 23:53:27.608038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.447 [2024-11-19 23:53:27.608086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f700 with addr=10.0.0.2, port=4420 00:28:53.447 [2024-11-19 23:53:27.608104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2f700 is same with the state(6) to be set 00:28:53.447 [2024-11-19 23:53:27.608124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2b5b0 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.609197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:53.447 [2024-11-19 23:53:27.609309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.447 [2024-11-19 23:53:27.609338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8c710 with addr=10.0.0.2, 
port=4420 00:28:53.447 [2024-11-19 23:53:27.609355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c710 is same with the state(6) to be set 00:28:53.447 [2024-11-19 23:53:27.609375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa390b0 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.609404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2f700 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.609421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:53.447 [2024-11-19 23:53:27.609435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:53.447 [2024-11-19 23:53:27.609456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:53.447 [2024-11-19 23:53:27.609474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:53.447 [2024-11-19 23:53:27.609637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.447 [2024-11-19 23:53:27.609666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6ca50 with addr=10.0.0.2, port=4420 00:28:53.447 [2024-11-19 23:53:27.609683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6ca50 is same with the state(6) to be set 00:28:53.447 [2024-11-19 23:53:27.609702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8c710 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.609728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:53.447 [2024-11-19 23:53:27.609742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:53.447 [2024-11-19 23:53:27.609756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:53.447 [2024-11-19 23:53:27.609770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:53.447 [2024-11-19 23:53:27.609784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:53.447 [2024-11-19 23:53:27.609798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:53.447 [2024-11-19 23:53:27.609811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:53.447 [2024-11-19 23:53:27.609824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:28:53.447 [2024-11-19 23:53:27.610171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6ca50 (9): Bad file descriptor 00:28:53.447 [2024-11-19 23:53:27.610197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:53.447 [2024-11-19 23:53:27.610212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:53.447 [2024-11-19 23:53:27.610225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:53.447 [2024-11-19 23:53:27.610238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:53.447 [2024-11-19 23:53:27.610292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:53.447 [2024-11-19 23:53:27.610311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:53.447 [2024-11-19 23:53:27.610324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:53.447 [2024-11-19 23:53:27.610338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:53.447 [2024-11-19 23:53:27.613925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.613958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.613993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614173] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.614977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.614991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.615007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.615021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.615037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.615051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.615079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.615095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.615111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.615125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.615140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.615153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.615169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.615183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.448 [2024-11-19 23:53:27.615199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.448 [2024-11-19 23:53:27.615214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.615950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.615965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa33cc0 is same with the state(6) to be set 00:28:53.449 [2024-11-19 23:53:27.617258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617642] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.449 [2024-11-19 23:53:27.617704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.449 [2024-11-19 23:53:27.617721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.617736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.617753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.617766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.617782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.617803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.617819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.617833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.617848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.617863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.617878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.617892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.617908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.617922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.617938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.617951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.617966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.617981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.617998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.450 [2024-11-19 23:53:27.618872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:53.450 [2024-11-19 23:53:27.618903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.450 [2024-11-19 23:53:27.618916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.618932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.618945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.618961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.618975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.618991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.619008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.619024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.619038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.619053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.619088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.619106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.619120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.619135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.619149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.619164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.619178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.619193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.619207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 
23:53:27.619223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.619236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.619252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.619270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.619285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe39850 is same with the state(6) to be set 00:28:53.451 [2024-11-19 23:53:27.620585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.620641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.620683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.620712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.620743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.620772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.620802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.620832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.620861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.620891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.620920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.620949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.620984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.620998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.621013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.621028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.621044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.621076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.621094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.621108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.621124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.621138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.621154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.621167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.621182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.621196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.621211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.621225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.621241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.621255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.621271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.451 [2024-11-19 23:53:27.621285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.451 [2024-11-19 23:53:27.621301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.621983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.621998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.452 [2024-11-19 23:53:27.622485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.452 [2024-11-19 23:53:27.622499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.622519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.622533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.622548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.622562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.622576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3ad80 is same with the state(6) to be set 00:28:53.453 [2024-11-19 23:53:27.623851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.623875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.623896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.623921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.623937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.623951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.623967] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.623980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.623996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.624980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.624995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.453 [2024-11-19 23:53:27.625010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.453 [2024-11-19 23:53:27.625024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:53.454 [2024-11-19 23:53:27.625250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 
23:53:27.625562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.625866] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.625882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3c340 is same with the state(6) to be set 00:28:53.454 [2024-11-19 23:53:27.627165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.627189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.627209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.627224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.627240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.627254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.627270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.627283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.627299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.627313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.627329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.627343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.627359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.627372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.627394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.627408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.627429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.627443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.627459] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.627472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.627488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.627502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.454 [2024-11-19 23:53:27.627517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.454 [2024-11-19 23:53:27.627532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.627977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.627992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.455 [2024-11-19 23:53:27.628639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.455 [2024-11-19 23:53:27.628654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.628668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.628683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:53.456 [2024-11-19 23:53:27.628697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.628712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.628726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.628742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.628755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.628771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.628785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.628800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.628813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.628828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.628841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.628857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.628871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.628886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.628899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.628914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.628928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.628948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.628963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.628978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 
23:53:27.628993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.629009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.629023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.629038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.629052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.629083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.629099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.629116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.456 [2024-11-19 23:53:27.629129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.456 [2024-11-19 23:53:27.629144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1088380 is same with the state(6) to be set 00:28:53.456 [2024-11-19 23:53:27.631502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:53.456 [2024-11-19 23:53:27.631536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:53.456 [2024-11-19 23:53:27.631556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:53.456 [2024-11-19 23:53:27.631575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:53.456 [2024-11-19 23:53:27.631709] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
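The repeated *NOTICE* pairs above are the nvme_qpair layer printing each outstanding READ on the queue pair together with an ABORTED - SQ DELETION completion as the I/O submission queues are torn down for the controller resets logged just below. The "(00/08)" suffix is the NVMe status code type / status code pair: type 0x0 is the generic command status set, and generic code 0x08 is "Command Aborted due to SQ Deletion". A minimal, self-contained C sketch of that decoding (illustrative only, independent of SPDK's own print helpers, covering just the codes that appear in this log):

#include <stdio.h>

/* Map the "(sct/sc)" pair printed after each completion, e.g. "(00/08)",
 * to a human-readable string. Only the generic status code type (0x0) and
 * the codes seen in this log are handled; everything else is "unknown". */
static const char *decode_nvme_status(unsigned int sct, unsigned int sc)
{
    if (sct != 0x0) {
        return "non-generic status code type (not decoded here)";
    }
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x08: return "ABORTED - SQ DELETION";
    default:   return "unknown generic status code";
    }
}

int main(void)
{
    /* The pair printed on every aborted READ above. */
    printf("(00/08) -> %s\n", decode_nvme_status(0x0, 0x08));
    return 0;
}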
00:28:53.456 task offset: 28928 on job bdev=Nvme3n1 fails
00:28:53.456
00:28:53.456 Latency(us)
00:28:53.456 [2024-11-19T22:53:27.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:53.456 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:53.456 Job: Nvme1n1 ended in about 0.90 seconds with error
00:28:53.456 Verification LBA range: start 0x0 length 0x400
00:28:53.456 Nvme1n1 : 0.90 141.47 8.84 70.73 0.00 298306.12 22719.15 259425.47
00:28:53.456 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:53.456 Job: Nvme2n1 ended in about 0.89 seconds with error
00:28:53.456 Verification LBA range: start 0x0 length 0x400
00:28:53.456 Nvme2n1 : 0.89 216.02 13.50 72.01 0.00 215048.72 14078.10 256318.58
00:28:53.456 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:53.456 Job: Nvme3n1 ended in about 0.88 seconds with error
00:28:53.456 Verification LBA range: start 0x0 length 0x400
00:28:53.456 Nvme3n1 : 0.88 217.51 13.59 72.50 0.00 208978.20 7378.87 237677.23
00:28:53.456 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:53.456 Job: Nvme4n1 ended in about 0.89 seconds with error
00:28:53.456 Verification LBA range: start 0x0 length 0x400
00:28:53.456 Nvme4n1 : 0.89 215.74 13.48 71.91 0.00 206208.76 12087.75 262532.36
00:28:53.456 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:53.456 Job: Nvme5n1 ended in about 0.91 seconds with error
00:28:53.456 Verification LBA range: start 0x0 length 0x400
00:28:53.456 Nvme5n1 : 0.91 140.96 8.81 70.48 0.00 275024.40 22039.51 264085.81
00:28:53.456 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:53.456 Job: Nvme6n1 ended in about 0.91 seconds with error
00:28:53.456 Verification LBA range: start 0x0 length 0x400
00:28:53.456 Nvme6n1 : 0.91 140.45 8.78 70.22 0.00 270198.96 32816.55 245444.46
00:28:53.456 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:53.456 Job: Nvme7n1 ended in about 0.91 seconds with error
00:28:53.456 Verification LBA range: start 0x0 length 0x400
00:28:53.456 Nvme7n1 : 0.91 139.94 8.75 69.97 0.00 265357.34 18738.44 260978.92
00:28:53.456 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:53.456 Job: Nvme8n1 ended in about 0.90 seconds with error
00:28:53.456 Verification LBA range: start 0x0 length 0x400
00:28:53.456 Nvme8n1 : 0.90 211.87 13.24 13.38 0.00 240168.74 2912.71 267192.70
00:28:53.456 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:53.456 Job: Nvme9n1 ended in about 0.89 seconds with error
00:28:53.456 Verification LBA range: start 0x0 length 0x400
00:28:53.456 Nvme9n1 : 0.89 143.60 8.98 71.80 0.00 245796.16 13010.11 276513.37
00:28:53.456 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:53.456 Job: Nvme10n1 ended in about 0.92 seconds with error
00:28:53.456 Verification LBA range: start 0x0 length 0x400
00:28:53.456 Nvme10n1 : 0.92 139.45 8.72 69.72 0.00 248699.01 21359.88 281173.71
00:28:53.456 [2024-11-19T22:53:27.768Z] ===================================================================================================================
00:28:53.456 [2024-11-19T22:53:27.768Z] Total : 1707.00 106.69 652.74 0.00 243969.73 2912.71 281173.71
00:28:53.456 [2024-11-19 23:53:27.659927] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:53.456 [2024-11-19 23:53:27.660022]
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:53.456 [2024-11-19 23:53:27.660336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.456 [2024-11-19 23:53:27.660385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa31450 with addr=10.0.0.2, port=4420 00:28:53.456 [2024-11-19 23:53:27.660406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa31450 is same with the state(6) to be set 00:28:53.456 [2024-11-19 23:53:27.660510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.456 [2024-11-19 23:53:27.660538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6b950 with addr=10.0.0.2, port=4420 00:28:53.456 [2024-11-19 23:53:27.660554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6b950 is same with the state(6) to be set 00:28:53.456 [2024-11-19 23:53:27.660641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.456 [2024-11-19 23:53:27.660668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x979f50 with addr=10.0.0.2, port=4420 00:28:53.456 [2024-11-19 23:53:27.660684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x979f50 is same with the state(6) to be set 00:28:53.456 [2024-11-19 23:53:27.660774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.456 [2024-11-19 23:53:27.660801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa29cc0 with addr=10.0.0.2, port=4420 00:28:53.456 [2024-11-19 23:53:27.660817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa29cc0 is same with the state(6) to be set 00:28:53.456 [2024-11-19 23:53:27.662246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:53.457 [2024-11-19 23:53:27.662276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:53.457 [2024-11-19 23:53:27.662311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:53.457 [2024-11-19 23:53:27.662330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:53.457 [2024-11-19 23:53:27.662359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:53.457 [2024-11-19 23:53:27.662536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.457 [2024-11-19 23:53:27.662566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9fad0 with addr=10.0.0.2, port=4420 00:28:53.457 [2024-11-19 23:53:27.662583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9fad0 is same with the state(6) to be set 00:28:53.457 [2024-11-19 23:53:27.662609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa31450 (9): Bad file descriptor 00:28:53.457 [2024-11-19 23:53:27.662631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6b950 (9): Bad file descriptor 00:28:53.457 [2024-11-19 23:53:27.662649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x979f50 (9): Bad file descriptor 
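The connect() failures above report errno = 111, which on Linux is ECONNREFUSED: by this point nothing is listening on 10.0.0.2:4420 any more, so every reconnect attempt is turned away immediately and the qpairs end up as bad file descriptors. A minimal hedged probe of that condition (assumes bash's /dev/tcp support; the address and port are the ones used in this run):
  if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
      echo "10.0.0.2:4420 is accepting connections"
  else
      echo "connection refused or unreachable - the NVMe/TCP listener is gone"
  fi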
00:28:53.457 [2024-11-19 23:53:27.662667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa29cc0 (9): Bad file descriptor 00:28:53.457 [2024-11-19 23:53:27.662738] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:53.457 [2024-11-19 23:53:27.662764] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:53.457 [2024-11-19 23:53:27.662783] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:28:53.457 [2024-11-19 23:53:27.662802] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:28:53.457 [2024-11-19 23:53:27.663199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.457 [2024-11-19 23:53:27.663230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2b5b0 with addr=10.0.0.2, port=4420 00:28:53.457 [2024-11-19 23:53:27.663247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2b5b0 is same with the state(6) to be set 00:28:53.457 [2024-11-19 23:53:27.663339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.457 [2024-11-19 23:53:27.663374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2f700 with addr=10.0.0.2, port=4420 00:28:53.457 [2024-11-19 23:53:27.663389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2f700 is same with the state(6) to be set 00:28:53.457 [2024-11-19 23:53:27.663472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.457 [2024-11-19 23:53:27.663498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa390b0 with addr=10.0.0.2, port=4420 00:28:53.457 [2024-11-19 23:53:27.663514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa390b0 is same with the state(6) to be set 00:28:53.457 [2024-11-19 23:53:27.663603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.457 [2024-11-19 23:53:27.663628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8c710 with addr=10.0.0.2, port=4420 00:28:53.457 [2024-11-19 23:53:27.663645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c710 is same with the state(6) to be set 00:28:53.457 [2024-11-19 23:53:27.663721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.457 [2024-11-19 23:53:27.663747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe6ca50 with addr=10.0.0.2, port=4420 00:28:53.457 [2024-11-19 23:53:27.663763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6ca50 is same with the state(6) to be set 00:28:53.457 [2024-11-19 23:53:27.663788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9fad0 (9): Bad file descriptor 00:28:53.457 [2024-11-19 23:53:27.663807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:53.457 [2024-11-19 23:53:27.663820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:53.457 [2024-11-19 23:53:27.663836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:53.457 [2024-11-19 23:53:27.663852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:53.457 [2024-11-19 23:53:27.663869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:53.457 [2024-11-19 23:53:27.663882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:53.457 [2024-11-19 23:53:27.663895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:53.457 [2024-11-19 23:53:27.663907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:53.457 [2024-11-19 23:53:27.663921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:53.457 [2024-11-19 23:53:27.663933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:53.457 [2024-11-19 23:53:27.663946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:53.457 [2024-11-19 23:53:27.663958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:53.457 [2024-11-19 23:53:27.663972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:53.457 [2024-11-19 23:53:27.663984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:53.457 [2024-11-19 23:53:27.663997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:53.457 [2024-11-19 23:53:27.664010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:53.457 [2024-11-19 23:53:27.664134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2b5b0 (9): Bad file descriptor 00:28:53.457 [2024-11-19 23:53:27.664161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2f700 (9): Bad file descriptor 00:28:53.457 [2024-11-19 23:53:27.664179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa390b0 (9): Bad file descriptor 00:28:53.457 [2024-11-19 23:53:27.664197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8c710 (9): Bad file descriptor 00:28:53.457 [2024-11-19 23:53:27.664214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe6ca50 (9): Bad file descriptor 00:28:53.457 [2024-11-19 23:53:27.664229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:53.457 [2024-11-19 23:53:27.664242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:53.457 [2024-11-19 23:53:27.664256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:28:53.457 [2024-11-19 23:53:27.664269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:53.457 [2024-11-19 23:53:27.664306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:53.457 [2024-11-19 23:53:27.664323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:53.457 [2024-11-19 23:53:27.664342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:53.457 [2024-11-19 23:53:27.664362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:53.457 [2024-11-19 23:53:27.664377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:53.457 [2024-11-19 23:53:27.664389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:53.457 [2024-11-19 23:53:27.664402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:53.457 [2024-11-19 23:53:27.664426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:53.457 [2024-11-19 23:53:27.664440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:53.457 [2024-11-19 23:53:27.664453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:53.457 [2024-11-19 23:53:27.664465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:53.457 [2024-11-19 23:53:27.664478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:53.457 [2024-11-19 23:53:27.664491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:53.457 [2024-11-19 23:53:27.664503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:53.457 [2024-11-19 23:53:27.664516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:53.457 [2024-11-19 23:53:27.664529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:53.457 [2024-11-19 23:53:27.664543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:53.457 [2024-11-19 23:53:27.664556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:53.457 [2024-11-19 23:53:27.664570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:53.457 [2024-11-19 23:53:27.664582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
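For reference, the completion status printed for the aborted READs earlier, ABORTED - SQ DELETION (00/08), is status code type 0x00 (the generic command status set) with status code 0x08, and the same pair reappears later in the tc4 run as "Write completed with error (sct=0, sc=8)". To my reading of the NVMe status encoding, that code means the command was aborted because its submission queue was deleted, which is the expected way in-flight IO dies while the target shuts down. A tiny sketch of the mapping, limited to the code that actually appears in this log:
  # sct selects the status-code set (0x00 = generic); sc is the code within that set
  declare -A nvme_generic_sc=( [0x08]="ABORTED - SQ DELETION" )
  printf 'sct=0x00 sc=0x08 -> %s\n' "${nvme_generic_sc[0x08]}"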
00:28:54.025 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 264933 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 264933 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 264933 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:54.965 rmmod nvme_tcp 00:28:54.965 
rmmod nvme_fabrics 00:28:54.965 rmmod nvme_keyring 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 264793 ']' 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 264793 00:28:54.965 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 264793 ']' 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 264793 00:28:54.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (264793) - No such process 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 264793 is not found' 00:28:54.966 Process with pid 264793 is not found 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.966 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:56.929 00:28:56.929 real 0m7.188s 00:28:56.929 user 0m17.108s 00:28:56.929 sys 0m1.492s 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:56.929 ************************************ 00:28:56.929 END TEST nvmf_shutdown_tc3 00:28:56.929 ************************************ 00:28:56.929 23:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:56.929 ************************************ 00:28:56.929 START TEST nvmf_shutdown_tc4 00:28:56.929 ************************************ 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.929 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:56.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:56.930 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.930 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.189 23:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:57.189 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:57.189 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:57.189 23:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.189 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:57.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:28:57.190 00:28:57.190 --- 10.0.0.2 ping statistics --- 00:28:57.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.190 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:57.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:57.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:28:57.190 00:28:57.190 --- 10.0.0.1 ping statistics --- 00:28:57.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.190 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=265834 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 265834 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 265834 ']' 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
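Condensed, the network plumbing traced above amounts to moving one NIC port into a private namespace, addressing both ends, punching a tagged firewall hole for the NVMe/TCP port, and ping-checking both directions. A sketch using this run's interface names (cvl_0_0 as the target port, cvl_0_1 as the initiator port); the SPDK_NVMF comment tag matters because teardown later strips every tagged rule in one pass with iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen in the tc3 cleanup earlier:
  ip netns add cvl_0_0_ns_spdk                                        # target lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator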
00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:57.190 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:57.448 [2024-11-19 23:53:31.518235] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:28:57.448 [2024-11-19 23:53:31.518325] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.448 [2024-11-19 23:53:31.595910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:57.448 [2024-11-19 23:53:31.648143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.449 [2024-11-19 23:53:31.648213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.449 [2024-11-19 23:53:31.648238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.449 [2024-11-19 23:53:31.648250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.449 [2024-11-19 23:53:31.648260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:57.449 [2024-11-19 23:53:31.649959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.449 [2024-11-19 23:53:31.650040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.449 [2024-11-19 23:53:31.650104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:57.449 [2024-11-19 23:53:31.650108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:57.707 [2024-11-19 23:53:31.802976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:57.707 23:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.707 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:57.707 Malloc1 
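The rpc_cmd/cat loop above is assembling a configuration that creates the TCP transport, ten Malloc bdevs, and one subsystem per bdev listening on 10.0.0.2:4420; the Malloc1 ... Malloc10 names around this point are the bdev names echoed back as they are created. A hedged sketch of roughly equivalent calls, assuming the stock scripts/rpc.py helpers and made-up bdev sizes and serial numbers; this is not a transcript of the exact commands this run issued:
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t TCP -u 8192                    # TCP transport, 8 KiB in-capsule data
  for i in $(seq 1 10); do
      $rpc bdev_malloc_create 64 512 -b Malloc$i               # 64 MiB bdev, 512-byte blocks (assumed sizes)
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done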
00:28:57.707 [2024-11-19 23:53:31.891798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.708 Malloc2 00:28:57.708 Malloc3 00:28:57.708 Malloc4 00:28:57.967 Malloc5 00:28:57.967 Malloc6 00:28:57.967 Malloc7 00:28:57.967 Malloc8 00:28:57.967 Malloc9 00:28:58.227 Malloc10 00:28:58.227 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.227 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:58.227 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.227 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:58.228 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=266011 00:28:58.228 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:58.228 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:58.228 [2024-11-19 23:53:32.415507] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:03.505 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:03.505 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 265834 00:29:03.505 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 265834 ']' 00:29:03.505 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 265834 00:29:03.505 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:03.505 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.505 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 265834 00:29:03.505 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:03.505 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:03.505 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 265834' 00:29:03.505 killing process with pid 265834 00:29:03.505 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 265834 00:29:03.505 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 265834 00:29:03.505 Write completed with error (sct=0, sc=8) 
00:29:03.505 starting I/O failed: -6
00:29:03.505 Write completed with error (sct=0, sc=8)
00:29:03.505 starting I/O failed: -6
00:29:03.505 [... the two messages above repeat for each outstanding write I/O on the affected qpairs; duplicate entries condensed ...]
00:29:03.505 [2024-11-19 23:53:37.400828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.505 NVMe io qpair process completion error
00:29:03.505 [2024-11-19 23:53:37.402084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2310 is same with the state(6) to be set [message repeated 8 times, 23:53:37.402084 through 23:53:37.402235]
00:29:03.505 [2024-11-19 23:53:37.405419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4750 is same with the state(6) to be set [message repeated 9 times, 23:53:37.405419 through 23:53:37.405563, interleaved with further write failures]
00:29:03.506 [2024-11-19 23:53:37.405623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:03.506 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
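Note for triage: the "Write completed with error (sct=0, sc=8)" entries above are NVMe completion statuses reported by the initiator-side test tool; sct is the status code type and sc the status code from the completion's status field. Going by the NVMe specification (not by anything printed in this log), sct=0 is the generic command status set and sc=0x08 in that set is "Command Aborted due to SQ Deletion", which is consistent with the target tearing its queue pairs down while writes are still in flight. The snippet below is a minimal standalone sketch of how those two fields are unpacked from the 16-bit status halfword; the helper names and the lookup table are illustrative, not taken from the SPDK sources.

/* Minimal sketch: decode the completion status fields printed in the log
 * above, e.g. "(sct=0, sc=8)".  Layout follows the NVMe spec Status Field
 * (phase tag in bit 0, SC in bits 8:1, SCT in bits 11:9); the helpers and
 * the name table are hypothetical, not SPDK code. */
#include <stdint.h>
#include <stdio.h>

static uint8_t status_code_type(uint16_t status) { return (status >> 9) & 0x7; }
static uint8_t status_code(uint16_t status)      { return (status >> 1) & 0xff; }

static const char *generic_sc_name(uint8_t sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other generic status";
    }
}

int main(void)
{
    /* Status halfword with SCT=0 (generic) and SC=0x08, i.e. the
     * "(sct=0, sc=8)" seen for every failed write above. */
    uint16_t status = (uint16_t)((0x0 << 9) | (0x08 << 1));

    uint8_t sct = status_code_type(status);
    uint8_t sc  = status_code(status);
    printf("sct=%u, sc=%u -> %s\n", (unsigned)sct, (unsigned)sc,
           sct == 0 ? generic_sc_name(sc) : "non-generic status code type");
    return 0;
}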
00:29:03.506 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.506 [2024-11-19 23:53:37.406820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.506 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.506 [2024-11-19 23:53:37.407919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.507 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.507 [2024-11-19 23:53:37.409548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:03.507 NVMe io qpair process completion error
00:29:03.507 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.507 [2024-11-19 23:53:37.410312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3a10 is same with the state(6) to be set [message repeated 10 times, 23:53:37.410312 through 23:53:37.410479, interleaved with the write failures]
00:29:03.508 [2024-11-19 23:53:37.410789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:03.508 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.508 [2024-11-19 23:53:37.411300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e43d0 is same with the state(6) to be set [message repeated 5 times, 23:53:37.411300 through 23:53:37.411395]
00:29:03.508 [2024-11-19 23:53:37.411823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.508 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.509 [2024-11-19 23:53:37.412907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.509 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.509 [2024-11-19 23:53:37.414549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:03.509 NVMe io qpair process completion error
00:29:03.509 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.510 [2024-11-19 23:53:37.415664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:03.510 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.510 [2024-11-19 23:53:37.416719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.510 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.510 [2024-11-19 23:53:37.417799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.511 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.511 [2024-11-19 23:53:37.419690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:03.511 NVMe io qpair process completion error
00:29:03.511 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
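Note for triage: the "-6" in "starting I/O failed: -6" and "CQ transport error -6 (No such device or address)" is -ENXIO, i.e. the TCP connection behind the qpair is gone. In SPDK terms, the completion poller (spdk_nvme_qpair_process_completions()) and new submissions (e.g. spdk_nvme_ns_cmd_write()) both start returning a negative errno once that happens, and the application is expected to stop treating the qpair as usable. The sketch below is a standalone mock of that submit/poll pattern, assuming hypothetical mock_* helpers rather than real SPDK calls, just to show where the two kinds of log line come from.

/* Standalone sketch of the submit/poll pattern whose failure path produces
 * the "starting I/O failed: -6" and "CQ transport error -6" lines above.
 * The mock_* functions are hypothetical stand-ins, not SPDK code. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

static bool connection_alive = true;   /* pretend transport state */
static int  polls;

static int mock_submit_write(void)
{
    /* A submission on a dead qpair fails immediately with a negative errno. */
    return connection_alive ? 0 : -ENXIO;
}

static int mock_process_completions(void)
{
    /* Pollers return the number of completions reaped, or a negative errno
     * once the completion queue can no longer be accessed; here the fake
     * connection drops on the third poll. */
    if (++polls == 3)
        connection_alive = false;
    return connection_alive ? 1 : -ENXIO;
}

int main(void)
{
    for (int i = 0; i < 6; i++) {
        int rc = mock_submit_write();
        if (rc != 0) {
            /* Mirrors the repeated "starting I/O failed: -6" entries above. */
            printf("starting I/O failed: %d\n", rc);
            continue;
        }
        rc = mock_process_completions();
        if (rc < 0) {
            /* Analogue of the "CQ transport error -6 ... on qpair id N" lines:
             * the completion path reports the broken connection, after which
             * the qpair should be disconnected and rebuilt by the application. */
            printf("completion polling failed: %d\n", rc);
        }
    }
    return 0;
}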
00:29:03.512 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.512 [2024-11-19 23:53:37.420783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:03.512 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.512 [2024-11-19 23:53:37.421861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.512 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.512 [2024-11-19 23:53:37.423217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.512 [... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 entries repeated; duplicates condensed ...]
00:29:03.513 [2024-11-19 23:53:37.425321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:03.513 NVMe io qpair process completion error
sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 [2024-11-19 23:53:37.426498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: -6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 starting I/O failed: 
-6 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.513 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 [2024-11-19 23:53:37.427553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed 
with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 [2024-11-19 23:53:37.428647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 
Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.514 Write completed with error (sct=0, sc=8) 00:29:03.514 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write 
completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 [2024-11-19 23:53:37.431096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.515 NVMe io qpair process completion error 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 [2024-11-19 23:53:37.432400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 
00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.515 starting I/O failed: -6 00:29:03.515 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 [2024-11-19 23:53:37.433464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with 
error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O 
failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 [2024-11-19 23:53:37.434697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.516 Write completed with error (sct=0, sc=8) 00:29:03.516 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 
00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 [2024-11-19 23:53:37.437000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:03.517 NVMe io qpair process completion error 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 
Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 [2024-11-19 23:53:37.438210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:03.517 starting I/O failed: -6 00:29:03.517 starting I/O failed: -6 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 starting I/O failed: -6 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.517 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: 
-6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 [2024-11-19 23:53:37.439287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 
00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 [2024-11-19 23:53:37.440429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error 
(sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.518 Write completed with error (sct=0, sc=8) 00:29:03.518 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error 
(sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 [2024-11-19 23:53:37.442722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.519 NVMe io qpair process completion error 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 [2024-11-19 23:53:37.443901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O failed: -6 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 Write completed with error (sct=0, sc=8) 00:29:03.519 starting I/O 
failed: -6
00:29:03.519 Write completed with error (sct=0, sc=8)
00:29:03.519 starting I/O failed: -6
00:29:03.519 Write completed with error (sct=0, sc=8)
00:29:03.519 starting I/O failed: -6
00:29:03.519 [2024-11-19 23:53:37.444850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.520 [2024-11-19 23:53:37.446181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.520 [2024-11-19 23:53:37.448390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:03.520 NVMe io qpair process completion error
00:29:03.521 Write completed with error (sct=0, sc=8)
00:29:03.521 starting I/O failed: -6
00:29:03.521 [2024-11-19 23:53:37.451972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:03.522 [2024-11-19 23:53:37.453015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.522 [2024-11-19 23:53:37.454202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:03.523 [2024-11-19 23:53:37.456603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:03.523 NVMe io qpair process completion error
00:29:03.523 Initializing NVMe Controllers
00:29:03.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:03.523 Controller IO queue size 128, less than required.
00:29:03.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:03.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:03.523 Controller IO queue size 128, less than required.
00:29:03.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:03.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:03.523 Controller IO queue size 128, less than required.
00:29:03.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:03.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:03.523 Controller IO queue size 128, less than required.
00:29:03.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:03.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:03.523 Controller IO queue size 128, less than required.
00:29:03.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:03.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:03.523 Controller IO queue size 128, less than required.
00:29:03.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:03.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:03.523 Controller IO queue size 128, less than required.
00:29:03.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:03.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:03.523 Controller IO queue size 128, less than required.
00:29:03.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:03.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:03.523 Controller IO queue size 128, less than required.
00:29:03.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:03.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:03.523 Controller IO queue size 128, less than required.
00:29:03.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:03.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:03.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:03.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:03.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:03.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:03.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:03.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:03.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:03.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:03.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:03.523 Initialization complete. Launching workers.
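The repeated "Controller IO queue size 128, less than required" messages above mean the benchmark submits more outstanding commands than the 128-entry I/O queues the target advertises, so the surplus requests wait inside the NVMe driver rather than on the wire. A minimal sketch of rerunning spdk_nvme_perf within that limit is shown below; the flags are standard perf options, but the queue depth, I/O size, run time, and target subsystem are illustrative assumptions, not the exact command this test invoked.

    # Illustrative only: keep the queue depth (-q) at or below the target's
    # 128-entry I/O queue and use a modest I/O size (-o, in bytes) so requests
    # are not queued at the NVMe driver.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'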
00:29:03.523 ========================================================
00:29:03.524 Latency(us)
00:29:03.524 Device Information                                                       :     IOPS    MiB/s    Average       min       max
00:29:03.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1797.96    77.26   71214.43   1105.38 120727.18
00:29:03.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1847.30    79.38   69347.02    823.42 148278.89
00:29:03.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1881.69    80.85   68112.87    899.35 116248.44
00:29:03.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1835.98    78.89   69734.07   1152.11 115597.70
00:29:03.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1842.39    79.17   69575.75    914.76 129541.38
00:29:03.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1828.93    78.59   69343.75   1114.41 112682.14
00:29:03.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1823.59    78.36   69567.90    874.96 114468.72
00:29:03.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1812.92    77.90   69994.84   1070.94 113878.68
00:29:03.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1802.02    77.43   70442.43    856.72 113130.01
00:29:03.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1798.60    77.28   70603.43   1109.59 112607.18
00:29:03.524 ========================================================
00:29:03.524 Total                                                                    : 18271.40   785.10   69783.35    823.42 148278.89
00:29:03.524
00:29:03.524 [2024-11-19 23:53:37.461416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c29c40 is same with the state(6) to be set
00:29:03.524 [2024-11-19 23:53:37.461522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33a40 is same with the state(6) to be set
00:29:03.524 [2024-11-19 23:53:37.461589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c24d40 is same with the state(6) to be set
00:29:03.524 [2024-11-19 23:53:37.461648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c07330 is same with the state(6) to be set
00:29:03.524 [2024-11-19 23:53:37.461710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1af40 is same with the state(6) to be set
00:29:03.524 [2024-11-19 23:53:37.461768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0c240 is same with the state(6) to be set
00:29:03.524 [2024-11-19 23:53:37.461826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c16040 is same with the state(6) to be set
00:29:03.524 [2024-11-19 23:53:37.461893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1fe40 is same with the state(6) to be set
00:29:03.524 [2024-11-19 23:53:37.461950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2eb40 is same with the state(6) to be set
00:29:03.524 [2024-11-19 23:53:37.462007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c11140 is same with the state(6) to be set
00:29:03.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:03.784 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:04.724 23:53:38
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 266011 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 266011 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 266011 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.724 rmmod nvme_tcp 00:29:04.724 rmmod nvme_fabrics 00:29:04.724 rmmod nvme_keyring 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 265834 ']' 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 265834 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 265834 ']' 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 265834 00:29:04.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (265834) - No such process 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 265834 is not found' 00:29:04.724 Process with pid 265834 is not found 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.724 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.262 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.262 00:29:07.262 real 0m9.749s 00:29:07.262 user 0m22.299s 00:29:07.262 sys 0m5.961s 00:29:07.262 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.262 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.262 ************************************ 00:29:07.262 END TEST nvmf_shutdown_tc4 00:29:07.262 ************************************ 00:29:07.262 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:07.262 00:29:07.262 real 0m36.943s 00:29:07.262 user 1m37.589s 00:29:07.262 sys 0m12.451s 00:29:07.262 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.262 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- 
# set +x 00:29:07.262 ************************************ 00:29:07.262 END TEST nvmf_shutdown 00:29:07.262 ************************************ 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:07.262 ************************************ 00:29:07.262 START TEST nvmf_nsid 00:29:07.262 ************************************ 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:07.262 * Looking for test storage... 00:29:07.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.262 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:07.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.262 --rc genhtml_branch_coverage=1 00:29:07.262 --rc genhtml_function_coverage=1 00:29:07.262 --rc genhtml_legend=1 00:29:07.262 --rc geninfo_all_blocks=1 00:29:07.262 --rc geninfo_unexecuted_blocks=1 00:29:07.262 00:29:07.262 ' 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:07.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.263 --rc genhtml_branch_coverage=1 00:29:07.263 --rc genhtml_function_coverage=1 00:29:07.263 --rc genhtml_legend=1 00:29:07.263 --rc geninfo_all_blocks=1 00:29:07.263 --rc geninfo_unexecuted_blocks=1 00:29:07.263 00:29:07.263 ' 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:07.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.263 --rc genhtml_branch_coverage=1 00:29:07.263 --rc genhtml_function_coverage=1 00:29:07.263 --rc genhtml_legend=1 00:29:07.263 --rc geninfo_all_blocks=1 00:29:07.263 --rc geninfo_unexecuted_blocks=1 00:29:07.263 00:29:07.263 ' 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:07.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.263 --rc genhtml_branch_coverage=1 00:29:07.263 --rc genhtml_function_coverage=1 00:29:07.263 --rc genhtml_legend=1 00:29:07.263 --rc geninfo_all_blocks=1 00:29:07.263 --rc geninfo_unexecuted_blocks=1 00:29:07.263 00:29:07.263 ' 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:07.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.263 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:09.170 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:09.171 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:09.171 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
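The device walk traced above ("Found 0000:0a:00.0 / 0000:0a:00.1 (0x8086 - 0x159b)") resolves each matching PCI function to its kernel net device through sysfs. What follows is a minimal stand-alone sketch of that lookup, using only the vendor/device IDs visible in this log; the real logic lives in gather_supported_nvmf_pci_devs in test/nvmf/common.sh and also covers x722, Mellanox and RDMA cases not exercised here.

for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")      # 0x8086 == Intel
    device=$(<"$pci/device")      # 0x159b == E810 port (ice driver), as seen in this run
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    # each PCI function exposes its net device(s) under <bdf>/net/
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "  net device: ${net##*/}"
    done
done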
00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:09.171 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:09.171 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.171 23:53:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:09.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:29:09.171 00:29:09.171 --- 10.0.0.2 ping statistics --- 00:29:09.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.171 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:09.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:29:09.171 00:29:09.171 --- 10.0.0.1 ping statistics --- 00:29:09.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.171 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=268639 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 268639 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 268639 ']' 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.171 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:09.430 [2024-11-19 23:53:43.517392] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
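The nvmf_tcp_init steps traced above isolate one of the two E810 ports in its own network namespace so it can act as the target side: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, NVMe/TCP port 4420 is opened with a tagged iptables rule, and both directions are verified with ping. Condensed from the commands in the trace (interface and namespace names exactly as they appear in this log):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# allow NVMe/TCP (4420) in, tagged so teardown can find the rule again
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                    # root namespace -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> root namespace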
00:29:09.430 [2024-11-19 23:53:43.517498] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.430 [2024-11-19 23:53:43.592153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.430 [2024-11-19 23:53:43.636878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.430 [2024-11-19 23:53:43.636934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.430 [2024-11-19 23:53:43.636963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.430 [2024-11-19 23:53:43.636980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.430 [2024-11-19 23:53:43.636995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.430 [2024-11-19 23:53:43.637663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=268766 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
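Besides the nvmf_tgt started inside the namespace (pid 268639 above), nsid.sh launches a second SPDK target pinned to another core with a private RPC socket, so the two instances can be configured independently. A sketch of that pattern using the paths shown in the trace; rpc_get_methods is only a convenient probe for "is the RPC server up", not something the script itself is shown calling, and waitforlisten in autotest_common.sh does this wait more carefully.

# second target: core mask 0x2, private RPC socket
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
    -m 2 -r /var/tmp/tgt2.sock &
tgt2pid=$!

# poll until the RPC server answers on that socket
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/tgt2.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done

# from here on, every rpc.py call aimed at this instance must pass -s /var/tmp/tgt2.sock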
00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=8fc88e19-89f9-403c-9d9d-1aac4ff7b356 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=4ea47ec5-67b2-44fb-a1c5-6f74f8e8f534 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=84a21789-8102-4778-9f0f-0fdcbaaad6cf 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:09.689 null0 00:29:09.689 null1 00:29:09.689 null2 00:29:09.689 [2024-11-19 23:53:43.815505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.689 [2024-11-19 23:53:43.822625] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:29:09.689 [2024-11-19 23:53:43.822704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid268766 ] 00:29:09.689 [2024-11-19 23:53:43.839741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 268766 /var/tmp/tgt2.sock 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 268766 ']' 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:09.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
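The three UUIDs generated above become the namespaces of the second target, and the point of the test is that the NGUID the kernel reports for each connected namespace equals its UUID with the dashes stripped. A stand-alone version of that check, mirroring the uuid2nguid / nvme_get_nguid steps traced below (the helper name check_nguid is assumed; the device and UUID are the ones from this run, but any connected namespace works):

check_nguid() {
    local dev=$1 uuid=$2
    local want got
    want=$(tr -d '-' <<< "$uuid" | tr '[:lower:]' '[:upper:]')               # uuid2nguid
    got=$(nvme id-ns "$dev" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ $got == "$want" ]] && echo "$dev: NGUID $got matches UUID $uuid"
}

check_nguid /dev/nvme0n1 8fc88e19-89f9-403c-9d9d-1aac4ff7b356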
00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.689 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:09.689 [2024-11-19 23:53:43.889669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.689 [2024-11-19 23:53:43.934754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.948 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.948 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:09.948 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:10.519 [2024-11-19 23:53:44.603298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.519 [2024-11-19 23:53:44.619534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:10.519 nvme0n1 nvme0n2 00:29:10.519 nvme1n1 00:29:10.519 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:10.519 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:10.519 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:11.089 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:12.027 23:53:46 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 8fc88e19-89f9-403c-9d9d-1aac4ff7b356 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8fc88e1989f9403c9d9d1aac4ff7b356 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8FC88E1989F9403C9D9D1AAC4FF7B356 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8FC88E1989F9403C9D9D1AAC4FF7B356 == \8\F\C\8\8\E\1\9\8\9\F\9\4\0\3\C\9\D\9\D\1\A\A\C\4\F\F\7\B\3\5\6 ]] 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 4ea47ec5-67b2-44fb-a1c5-6f74f8e8f534 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:12.027 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4ea47ec567b244fba1c56f74f8e8f534 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4EA47EC567B244FBA1C56F74F8E8F534 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 4EA47EC567B244FBA1C56F74F8E8F534 == \4\E\A\4\7\E\C\5\6\7\B\2\4\4\F\B\A\1\C\5\6\F\7\4\F\8\E\8\F\5\3\4 ]] 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:12.287 23:53:46 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 84a21789-8102-4778-9f0f-0fdcbaaad6cf 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=84a21789810247789f0f0fdcbaaad6cf 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 84A21789810247789F0F0FDCBAAAD6CF 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 84A21789810247789F0F0FDCBAAAD6CF == \8\4\A\2\1\7\8\9\8\1\0\2\4\7\7\8\9\F\0\F\0\F\D\C\B\A\A\A\D\6\C\F ]] 00:29:12.287 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 268766 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 268766 ']' 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 268766 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 268766 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 268766' 00:29:12.547 killing process with pid 268766 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 268766 00:29:12.547 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 268766 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:12.806 rmmod nvme_tcp 00:29:12.806 rmmod nvme_fabrics 00:29:12.806 rmmod nvme_keyring 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 268639 ']' 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 268639 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 268639 ']' 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 268639 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:12.806 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 268639 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 268639' 00:29:13.065 killing process with pid 268639 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 268639 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 268639 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.065 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.607 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:15.607 00:29:15.607 real 0m8.328s 00:29:15.607 user 0m8.171s 00:29:15.607 
sys 0m2.632s 00:29:15.607 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.607 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:15.607 ************************************ 00:29:15.607 END TEST nvmf_nsid 00:29:15.607 ************************************ 00:29:15.607 23:53:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:15.607 00:29:15.607 real 18m14.249s 00:29:15.607 user 50m30.773s 00:29:15.607 sys 3m59.155s 00:29:15.607 23:53:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.607 23:53:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:15.607 ************************************ 00:29:15.607 END TEST nvmf_target_extra 00:29:15.607 ************************************ 00:29:15.607 23:53:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:15.607 23:53:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:15.607 23:53:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.607 23:53:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:15.607 ************************************ 00:29:15.607 START TEST nvmf_host 00:29:15.607 ************************************ 00:29:15.607 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:15.607 * Looking for test storage... 00:29:15.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:15.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.608 --rc genhtml_branch_coverage=1 00:29:15.608 --rc genhtml_function_coverage=1 00:29:15.608 --rc genhtml_legend=1 00:29:15.608 --rc geninfo_all_blocks=1 00:29:15.608 --rc geninfo_unexecuted_blocks=1 00:29:15.608 00:29:15.608 ' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:15.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.608 --rc genhtml_branch_coverage=1 00:29:15.608 --rc genhtml_function_coverage=1 00:29:15.608 --rc genhtml_legend=1 00:29:15.608 --rc geninfo_all_blocks=1 00:29:15.608 --rc geninfo_unexecuted_blocks=1 00:29:15.608 00:29:15.608 ' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:15.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.608 --rc genhtml_branch_coverage=1 00:29:15.608 --rc genhtml_function_coverage=1 00:29:15.608 --rc genhtml_legend=1 00:29:15.608 --rc geninfo_all_blocks=1 00:29:15.608 --rc geninfo_unexecuted_blocks=1 00:29:15.608 00:29:15.608 ' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:15.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.608 --rc genhtml_branch_coverage=1 00:29:15.608 --rc genhtml_function_coverage=1 00:29:15.608 --rc genhtml_legend=1 00:29:15.608 --rc geninfo_all_blocks=1 00:29:15.608 --rc geninfo_unexecuted_blocks=1 00:29:15.608 00:29:15.608 ' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
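The teardown traced at the end of the nsid test above (rmmod of nvme_tcp/nvme_fabrics/nvme_keyring, killprocess of both targets, then iptr) relies on the SPDK_NVMF comment attached when the firewall rule was installed: restoring the saved ruleset minus the tagged lines removes exactly what the test added and nothing else.

# rules are added tagged with an SPDK_NVMF comment (as the ipts helper did earlier) ...
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# ... and dropped all at once during cleanup
iptables-save | grep -v SPDK_NVMF | iptables-restore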
00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:15.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.608 ************************************ 00:29:15.608 START TEST nvmf_multicontroller 00:29:15.608 ************************************ 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:15.608 * Looking for test storage... 
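Both nvmf_host.sh above and multicontroller.sh below run the same lcov version gate before wiring up LCOV_OPTS: versions are split on '.', '-' and ':' and compared element by element, and lcov older than 2 keeps the legacy --rc lcov_branch_coverage / lcov_function_coverage option names. A condensed reading of that lt/cmp_versions trace; version_lt is an assumed name, and the real helpers in scripts/common.sh also validate each field through decimal before comparing.

version_lt() {                       # returns 0 when $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local i
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        ((a > b)) && return 1
        ((a < b)) && return 0
    done
    return 1                         # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"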
00:29:15.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:15.608 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:15.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.609 --rc genhtml_branch_coverage=1 00:29:15.609 --rc genhtml_function_coverage=1 00:29:15.609 --rc genhtml_legend=1 00:29:15.609 --rc geninfo_all_blocks=1 00:29:15.609 --rc geninfo_unexecuted_blocks=1 00:29:15.609 00:29:15.609 ' 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:15.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.609 --rc genhtml_branch_coverage=1 00:29:15.609 --rc genhtml_function_coverage=1 00:29:15.609 --rc genhtml_legend=1 00:29:15.609 --rc geninfo_all_blocks=1 00:29:15.609 --rc geninfo_unexecuted_blocks=1 00:29:15.609 00:29:15.609 ' 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:15.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.609 --rc genhtml_branch_coverage=1 00:29:15.609 --rc genhtml_function_coverage=1 00:29:15.609 --rc genhtml_legend=1 00:29:15.609 --rc geninfo_all_blocks=1 00:29:15.609 --rc geninfo_unexecuted_blocks=1 00:29:15.609 00:29:15.609 ' 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:15.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.609 --rc genhtml_branch_coverage=1 00:29:15.609 --rc genhtml_function_coverage=1 00:29:15.609 --rc genhtml_legend=1 00:29:15.609 --rc geninfo_all_blocks=1 00:29:15.609 --rc geninfo_unexecuted_blocks=1 00:29:15.609 00:29:15.609 ' 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:15.609 23:53:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:15.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:15.609 23:53:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:15.609 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.610 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:15.610 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:15.610 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:15.610 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.610 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.610 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.610 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:15.610 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:15.610 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:15.610 23:53:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.514 
23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.514 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:17.515 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:17.515 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.515 23:53:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:17.515 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:17.515 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
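The device discovery traced above reduces to two steps: collect PCI addresses whose vendor:device IDs are on the supported-NIC lists (e810/x722/mlx), then map each address to its kernel net device through sysfs, which is how the two ice ports 0000:0a:00.0 and 0000:0a:00.1 resolve to cvl_0_0 and cvl_0_1. A standalone sketch of just the sysfs mapping (addresses taken from the log; the helper in common.sh additionally checks the interface operstate and the requested transport type):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue
            echo "Found net device under $pci: $(basename "$netdir")"
        done
    done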
00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.515 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.773 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:17.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:29:17.774 00:29:17.774 --- 10.0.0.2 ping statistics --- 00:29:17.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.774 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:17.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:29:17.774 00:29:17.774 --- 10.0.0.1 ping statistics --- 00:29:17.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.774 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=271198 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 271198 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 271198 ']' 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.774 23:53:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:17.774 [2024-11-19 23:53:52.008132] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
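For reference, the nvmf_tcp_init sequence traced above gives the target side its own network namespace so that initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) exchange real TCP traffic on a single host. The same steps collected into one runnable sketch, using the interface names, addresses and port from the log (run as root; the SPDK_NVMF comment tag on the iptables rule is omitted for brevity):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port toward the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt launched right after this runs inside cvl_0_0_ns_spdk (note the "ip netns exec" prefix on its command line), which is why its listeners on 10.0.0.2 are reachable from the host-side initiator interface.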
00:29:17.774 [2024-11-19 23:53:52.008209] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.032 [2024-11-19 23:53:52.087770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:18.032 [2024-11-19 23:53:52.137428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.032 [2024-11-19 23:53:52.137495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.032 [2024-11-19 23:53:52.137511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.032 [2024-11-19 23:53:52.137524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.032 [2024-11-19 23:53:52.137535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:18.032 [2024-11-19 23:53:52.139103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.032 [2024-11-19 23:53:52.139183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.032 [2024-11-19 23:53:52.139186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.032 [2024-11-19 23:53:52.280215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.032 Malloc0 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.032 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.291 [2024-11-19 23:53:52.346361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.291 [2024-11-19 23:53:52.354243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.291 Malloc1 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=271231 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 271231 /var/tmp/bdevperf.sock 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 271231 ']' 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:18.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
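From this point the test drives bdevperf through its private RPC socket (/var/tmp/bdevperf.sock) rather than the target's default /var/tmp/spdk.sock; the rpc_cmd wrapper effectively invokes scripts/rpc.py with -s pointing at that socket. A condensed sketch of the flow, using only binaries and arguments that appear in the trace (the polling loop is a stand-in for the harness's waitforlisten helper):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock

    # start bdevperf idle (-z) on its own RPC socket
    $SPDK/build/examples/bdevperf -z -r $SOCK -q 128 -o 4096 -w write -t 1 -f &
    while [ ! -S $SOCK ]; do sleep 0.2; done

    # attach the target subsystem as bdev NVMe0, then trigger the actual run
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests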
00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.291 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.548 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.548 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:18.548 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:18.548 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.548 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.807 NVMe0n1 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.807 1 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.807 request: 00:29:18.807 { 00:29:18.807 "name": "NVMe0", 00:29:18.807 "trtype": "tcp", 00:29:18.807 "traddr": "10.0.0.2", 00:29:18.807 "adrfam": "ipv4", 00:29:18.807 "trsvcid": "4420", 00:29:18.807 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:18.807 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:18.807 "hostaddr": "10.0.0.1", 00:29:18.807 "prchk_reftag": false, 00:29:18.807 "prchk_guard": false, 00:29:18.807 "hdgst": false, 00:29:18.807 "ddgst": false, 00:29:18.807 "allow_unrecognized_csi": false, 00:29:18.807 "method": "bdev_nvme_attach_controller", 00:29:18.807 "req_id": 1 00:29:18.807 } 00:29:18.807 Got JSON-RPC error response 00:29:18.807 response: 00:29:18.807 { 00:29:18.807 "code": -114, 00:29:18.807 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:18.807 } 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.807 request: 00:29:18.807 { 00:29:18.807 "name": "NVMe0", 00:29:18.807 "trtype": "tcp", 00:29:18.807 "traddr": "10.0.0.2", 00:29:18.807 "adrfam": "ipv4", 00:29:18.807 "trsvcid": "4420", 00:29:18.807 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:18.807 "hostaddr": "10.0.0.1", 00:29:18.807 "prchk_reftag": false, 00:29:18.807 "prchk_guard": false, 00:29:18.807 "hdgst": false, 00:29:18.807 "ddgst": false, 00:29:18.807 "allow_unrecognized_csi": false, 00:29:18.807 "method": "bdev_nvme_attach_controller", 00:29:18.807 "req_id": 1 00:29:18.807 } 00:29:18.807 Got JSON-RPC error response 00:29:18.807 response: 00:29:18.807 { 00:29:18.807 "code": -114, 00:29:18.807 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:18.807 } 00:29:18.807 23:53:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:18.807 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.808 request: 00:29:18.808 { 00:29:18.808 "name": "NVMe0", 00:29:18.808 "trtype": "tcp", 00:29:18.808 "traddr": "10.0.0.2", 00:29:18.808 "adrfam": "ipv4", 00:29:18.808 "trsvcid": "4420", 00:29:18.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:18.808 "hostaddr": "10.0.0.1", 00:29:18.808 "prchk_reftag": false, 00:29:18.808 "prchk_guard": false, 00:29:18.808 "hdgst": false, 00:29:18.808 "ddgst": false, 00:29:18.808 "multipath": "disable", 00:29:18.808 "allow_unrecognized_csi": false, 00:29:18.808 "method": "bdev_nvme_attach_controller", 00:29:18.808 "req_id": 1 00:29:18.808 } 00:29:18.808 Got JSON-RPC error response 00:29:18.808 response: 00:29:18.808 { 00:29:18.808 "code": -114, 00:29:18.808 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:18.808 } 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:18.808 23:53:52 
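The three rejected attach attempts above all fail the same way: reusing the bdev name NVMe0 with a different hostnqn, a different subsystem NQN, or with multipath disabled is answered with JSON-RPC error code -114 (matching Linux -EALREADY) and a "controller named NVMe0 already exists" message, and the test only asserts that the calls fail. A sketch of how a caller could recognize that specific error (illustrative handling, not part of the test script):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if ! out=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 2>&1); then
        if grep -q '"code": -114' <<< "$out"; then
            echo "NVMe0 is already attached on this path"   # the expected outcome in this test
        else
            echo "$out" >&2
            exit 1
        fi
    fi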
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.808 request: 00:29:18.808 { 00:29:18.808 "name": "NVMe0", 00:29:18.808 "trtype": "tcp", 00:29:18.808 "traddr": "10.0.0.2", 00:29:18.808 "adrfam": "ipv4", 00:29:18.808 "trsvcid": "4420", 00:29:18.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:18.808 "hostaddr": "10.0.0.1", 00:29:18.808 "prchk_reftag": false, 00:29:18.808 "prchk_guard": false, 00:29:18.808 "hdgst": false, 00:29:18.808 "ddgst": false, 00:29:18.808 "multipath": "failover", 00:29:18.808 "allow_unrecognized_csi": false, 00:29:18.808 "method": "bdev_nvme_attach_controller", 00:29:18.808 "req_id": 1 00:29:18.808 } 00:29:18.808 Got JSON-RPC error response 00:29:18.808 response: 00:29:18.808 { 00:29:18.808 "code": -114, 00:29:18.808 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:18.808 } 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.808 23:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.065 NVMe0n1 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
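After the failover-mode attempt is also rejected, attaching the same name NVMe0 to the second listener on port 4421 succeeds (the trace prints NVMe0n1 again), and in the next block that 4421 path is detached and a separate controller NVMe1 is attached on 4421. The count check at multicontroller.sh@90 can be reproduced directly against the bdevperf socket, mirroring the grep in the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    count=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe)
    echo "bdevperf sees $count controller(s)"    # the script expects 2 here (NVMe0 and NVMe1)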
00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.065 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.065 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.323 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:19.323 23:53:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:20.260 { 00:29:20.260 "results": [ 00:29:20.260 { 00:29:20.260 "job": "NVMe0n1", 00:29:20.260 "core_mask": "0x1", 00:29:20.260 "workload": "write", 00:29:20.260 "status": "finished", 00:29:20.260 "queue_depth": 128, 00:29:20.260 "io_size": 4096, 00:29:20.260 "runtime": 1.006966, 00:29:20.260 "iops": 18541.837559560103, 00:29:20.260 "mibps": 72.42905296703165, 00:29:20.260 "io_failed": 0, 00:29:20.260 "io_timeout": 0, 00:29:20.260 "avg_latency_us": 6892.47183491134, 00:29:20.260 "min_latency_us": 3131.1644444444446, 00:29:20.260 "max_latency_us": 12136.296296296296 00:29:20.260 } 00:29:20.260 ], 00:29:20.260 "core_count": 1 00:29:20.260 } 00:29:20.260 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:20.260 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.260 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.260 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.260 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:20.260 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 271231 00:29:20.260 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 271231 ']' 00:29:20.260 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 271231 00:29:20.260 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:20.260 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.260 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 271231 00:29:20.519 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:20.519 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:20.519 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 271231' 00:29:20.519 killing process with pid 271231 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 271231 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 271231 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:20.520 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:20.520 [2024-11-19 23:53:52.458819] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:29:20.520 [2024-11-19 23:53:52.458905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid271231 ] 00:29:20.520 [2024-11-19 23:53:52.525889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.520 [2024-11-19 23:53:52.572213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.520 [2024-11-19 23:53:53.357175] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name d9f67e44-8f3e-4e84-93f9-ef3c8536c247 already exists 00:29:20.520 [2024-11-19 23:53:53.357214] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:d9f67e44-8f3e-4e84-93f9-ef3c8536c247 alias for bdev NVMe1n1 00:29:20.520 [2024-11-19 23:53:53.357236] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:20.520 Running I/O for 1 seconds... 00:29:20.520 18543.00 IOPS, 72.43 MiB/s 00:29:20.520 Latency(us) 00:29:20.520 [2024-11-19T22:53:54.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.520 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:20.520 NVMe0n1 : 1.01 18541.84 72.43 0.00 0.00 6892.47 3131.16 12136.30 00:29:20.520 [2024-11-19T22:53:54.832Z] =================================================================================================================== 00:29:20.520 [2024-11-19T22:53:54.832Z] Total : 18541.84 72.43 0.00 0.00 6892.47 3131.16 12136.30 00:29:20.520 Received shutdown signal, test time was about 1.000000 seconds 00:29:20.520 00:29:20.520 Latency(us) 00:29:20.520 [2024-11-19T22:53:54.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.520 [2024-11-19T22:53:54.832Z] =================================================================================================================== 00:29:20.520 [2024-11-19T22:53:54.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:20.520 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:20.520 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:20.520 rmmod nvme_tcp 00:29:20.520 rmmod nvme_fabrics 00:29:20.520 rmmod nvme_keyring 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:20.779 
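As a sanity check on the bdevperf summary above: with 4 KiB writes (-o 4096) at queue depth 128, 18541.84 IOPS corresponds to 18541.84 * 4096 B/s, about 75.9 MB/s or 72.43 MiB/s (dividing by 2^20), which matches the reported throughput; Little's law also predicts an average latency of roughly 128 / 18541.84 s, about 6.9 ms, in line with the reported 6892.47 us.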
23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 271198 ']' 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 271198 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 271198 ']' 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 271198 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 271198 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 271198' 00:29:20.779 killing process with pid 271198 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 271198 00:29:20.779 23:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 271198 00:29:21.038 23:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.038 23:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.038 23:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.038 23:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:21.038 23:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:21.038 23:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.038 23:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.038 23:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.038 23:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.038 23:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.038 23:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.038 23:53:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.950 23:53:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:22.950 00:29:22.950 real 0m7.563s 00:29:22.950 user 0m12.140s 00:29:22.950 sys 0m2.359s 00:29:22.950 23:53:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:22.950 23:53:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.950 ************************************ 00:29:22.950 END TEST nvmf_multicontroller 00:29:22.950 ************************************ 00:29:22.950 23:53:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:22.950 23:53:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:22.950 23:53:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:22.950 23:53:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.950 ************************************ 00:29:22.950 START TEST nvmf_aer 00:29:22.950 ************************************ 00:29:22.950 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:23.209 * Looking for test storage... 00:29:23.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.209 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:23.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.210 --rc genhtml_branch_coverage=1 00:29:23.210 --rc genhtml_function_coverage=1 00:29:23.210 --rc genhtml_legend=1 00:29:23.210 --rc geninfo_all_blocks=1 00:29:23.210 --rc geninfo_unexecuted_blocks=1 00:29:23.210 00:29:23.210 ' 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:23.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.210 --rc genhtml_branch_coverage=1 00:29:23.210 --rc genhtml_function_coverage=1 00:29:23.210 --rc genhtml_legend=1 00:29:23.210 --rc geninfo_all_blocks=1 00:29:23.210 --rc geninfo_unexecuted_blocks=1 00:29:23.210 00:29:23.210 ' 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:23.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.210 --rc genhtml_branch_coverage=1 00:29:23.210 --rc genhtml_function_coverage=1 00:29:23.210 --rc genhtml_legend=1 00:29:23.210 --rc geninfo_all_blocks=1 00:29:23.210 --rc geninfo_unexecuted_blocks=1 00:29:23.210 00:29:23.210 ' 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:23.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.210 --rc genhtml_branch_coverage=1 00:29:23.210 --rc genhtml_function_coverage=1 00:29:23.210 --rc genhtml_legend=1 00:29:23.210 --rc geninfo_all_blocks=1 00:29:23.210 --rc geninfo_unexecuted_blocks=1 00:29:23.210 00:29:23.210 ' 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:23.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:23.210 23:53:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:25.117 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:25.117 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:25.117 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.117 23:53:59 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:25.117 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.117 
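For reference, the per-test fixture the log has just executed (nvmf_tcp_init from test/nvmf/common.sh) reduces to the sequence sketched below; this is a minimal reconstruction assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in this run, not the full helper. The two ping entries that follow simply confirm that the initiator and the namespaced target can reach each other before the nvmf target application is started.

    # Sketch of the TCP fixture above: isolate the target port in a network
    # namespace, address both ends, and open the NVMe/TCP listener port.
    ip netns add cvl_0_0_ns_spdk                                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move target interface into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (inside netns)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic in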
23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:29:25.117 00:29:25.117 --- 10.0.0.2 ping statistics --- 00:29:25.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.117 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:25.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:29:25.117 00:29:25.117 --- 10.0.0.1 ping statistics --- 00:29:25.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.117 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.117 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:25.376 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.376 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.376 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.376 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.376 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.376 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.376 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.376 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:25.376 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.376 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.376 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.376 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=273446 00:29:25.377 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:25.377 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 273446 00:29:25.377 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 273446 ']' 00:29:25.377 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.377 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.377 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.377 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.377 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.377 [2024-11-19 23:53:59.501464] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:29:25.377 [2024-11-19 23:53:59.501542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.377 [2024-11-19 23:53:59.573300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.377 [2024-11-19 23:53:59.621221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.377 [2024-11-19 23:53:59.621278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.377 [2024-11-19 23:53:59.621299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.377 [2024-11-19 23:53:59.621317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.377 [2024-11-19 23:53:59.621332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.377 [2024-11-19 23:53:59.622992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.377 [2024-11-19 23:53:59.623139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.377 [2024-11-19 23:53:59.623177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.377 [2024-11-19 23:53:59.623172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.637 [2024-11-19 23:53:59.774855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.637 Malloc0 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.637 [2024-11-19 23:53:59.851185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.637 [ 00:29:25.637 { 00:29:25.637 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:25.637 "subtype": "Discovery", 00:29:25.637 "listen_addresses": [], 00:29:25.637 "allow_any_host": true, 00:29:25.637 "hosts": [] 00:29:25.637 }, 00:29:25.637 { 00:29:25.637 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.637 "subtype": "NVMe", 00:29:25.637 "listen_addresses": [ 00:29:25.637 { 00:29:25.637 "trtype": "TCP", 00:29:25.637 "adrfam": "IPv4", 00:29:25.637 "traddr": "10.0.0.2", 00:29:25.637 "trsvcid": "4420" 00:29:25.637 } 00:29:25.637 ], 00:29:25.637 "allow_any_host": true, 00:29:25.637 "hosts": [], 00:29:25.637 "serial_number": "SPDK00000000000001", 00:29:25.637 "model_number": "SPDK bdev Controller", 00:29:25.637 "max_namespaces": 2, 00:29:25.637 "min_cntlid": 1, 00:29:25.637 "max_cntlid": 65519, 00:29:25.637 "namespaces": [ 00:29:25.637 { 00:29:25.637 "nsid": 1, 00:29:25.637 "bdev_name": "Malloc0", 00:29:25.637 "name": "Malloc0", 00:29:25.637 "nguid": "1C2C1D68B4984B128EDEEFD19B720223", 00:29:25.637 "uuid": "1c2c1d68-b498-4b12-8ede-efd19b720223" 00:29:25.637 } 00:29:25.637 ] 00:29:25.637 } 00:29:25.637 ] 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=273585 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:25.637 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:25.897 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:25.897 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:25.897 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:25.897 23:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.897 Malloc1 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.897 [ 00:29:25.897 { 00:29:25.897 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:25.897 "subtype": "Discovery", 00:29:25.897 "listen_addresses": [], 00:29:25.897 "allow_any_host": true, 00:29:25.897 "hosts": [] 00:29:25.897 }, 00:29:25.897 { 00:29:25.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.897 "subtype": "NVMe", 00:29:25.897 "listen_addresses": [ 00:29:25.897 { 00:29:25.897 "trtype": "TCP", 00:29:25.897 "adrfam": "IPv4", 00:29:25.897 "traddr": "10.0.0.2", 00:29:25.897 "trsvcid": "4420" 00:29:25.897 } 00:29:25.897 ], 00:29:25.897 "allow_any_host": true, 00:29:25.897 "hosts": [], 00:29:25.897 "serial_number": "SPDK00000000000001", 00:29:25.897 "model_number": "SPDK bdev Controller", 00:29:25.897 "max_namespaces": 2, 00:29:25.897 "min_cntlid": 1, 00:29:25.897 "max_cntlid": 65519, 00:29:25.897 "namespaces": [ 00:29:25.897 { 00:29:25.897 "nsid": 1, 00:29:25.897 "bdev_name": "Malloc0", 00:29:25.897 "name": "Malloc0", 00:29:25.897 "nguid": "1C2C1D68B4984B128EDEEFD19B720223", 00:29:25.897 "uuid": "1c2c1d68-b498-4b12-8ede-efd19b720223" 00:29:25.897 }, 00:29:25.897 { 00:29:25.897 "nsid": 2, 00:29:25.897 "bdev_name": "Malloc1", 00:29:25.897 "name": "Malloc1", 00:29:25.897 "nguid": "3F4006526EF4471DA07A98EA590E0A86", 00:29:25.897 "uuid": 
"3f400652-6ef4-471d-a07a-98ea590e0a86" 00:29:25.897 } 00:29:25.897 ] 00:29:25.897 } 00:29:25.897 ] 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 273585 00:29:25.897 Asynchronous Event Request test 00:29:25.897 Attaching to 10.0.0.2 00:29:25.897 Attached to 10.0.0.2 00:29:25.897 Registering asynchronous event callbacks... 00:29:25.897 Starting namespace attribute notice tests for all controllers... 00:29:25.897 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:25.897 aer_cb - Changed Namespace 00:29:25.897 Cleaning up... 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.897 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.156 rmmod nvme_tcp 00:29:26.156 rmmod nvme_fabrics 00:29:26.156 rmmod nvme_keyring 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 273446 ']' 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 273446 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 273446 ']' 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 273446 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:26.156 23:54:00 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 273446 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 273446' 00:29:26.156 killing process with pid 273446 00:29:26.156 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 273446 00:29:26.157 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 273446 00:29:26.416 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:26.416 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:26.416 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:26.416 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:26.416 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:26.416 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:26.416 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:26.416 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.416 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.416 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.416 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.416 23:54:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.320 23:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.320 00:29:28.320 real 0m5.336s 00:29:28.320 user 0m4.275s 00:29:28.320 sys 0m1.843s 00:29:28.320 23:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.320 23:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:28.320 ************************************ 00:29:28.320 END TEST nvmf_aer 00:29:28.320 ************************************ 00:29:28.320 23:54:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:28.320 23:54:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:28.320 23:54:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.320 23:54:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.320 ************************************ 00:29:28.320 START TEST nvmf_async_init 00:29:28.320 ************************************ 00:29:28.320 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:28.579 * Looking for test storage... 
00:29:28.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:28.579 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:28.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.580 --rc genhtml_branch_coverage=1 00:29:28.580 --rc genhtml_function_coverage=1 00:29:28.580 --rc genhtml_legend=1 00:29:28.580 --rc geninfo_all_blocks=1 00:29:28.580 --rc geninfo_unexecuted_blocks=1 00:29:28.580 00:29:28.580 ' 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:28.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.580 --rc genhtml_branch_coverage=1 00:29:28.580 --rc genhtml_function_coverage=1 00:29:28.580 --rc genhtml_legend=1 00:29:28.580 --rc geninfo_all_blocks=1 00:29:28.580 --rc geninfo_unexecuted_blocks=1 00:29:28.580 00:29:28.580 ' 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:28.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.580 --rc genhtml_branch_coverage=1 00:29:28.580 --rc genhtml_function_coverage=1 00:29:28.580 --rc genhtml_legend=1 00:29:28.580 --rc geninfo_all_blocks=1 00:29:28.580 --rc geninfo_unexecuted_blocks=1 00:29:28.580 00:29:28.580 ' 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:28.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.580 --rc genhtml_branch_coverage=1 00:29:28.580 --rc genhtml_function_coverage=1 00:29:28.580 --rc genhtml_legend=1 00:29:28.580 --rc geninfo_all_blocks=1 00:29:28.580 --rc geninfo_unexecuted_blocks=1 00:29:28.580 00:29:28.580 ' 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.580 23:54:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:28.580 23:54:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d90622747eed48e797856ef28d70cfcc 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.580 23:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:30.485 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:30.485 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:30.485 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:30.485 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.485 23:54:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:30.485 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:30.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:29:30.759 00:29:30.759 --- 10.0.0.2 ping statistics --- 00:29:30.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.759 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:29:30.759 00:29:30.759 --- 10.0.0.1 ping statistics --- 00:29:30.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.759 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=275642 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 275642 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 275642 ']' 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.759 23:54:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:30.759 [2024-11-19 23:54:04.974810] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
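The nvmftestinit trace above is the network bring-up for this run: the two ice-driven ports found at 0000:0a:00.0/0000:0a:00.1 are split so that cvl_0_0 becomes the target interface inside a private network namespace and cvl_0_1 stays in the default namespace as the initiator, reachability is checked in both directions, nvme-tcp is loaded, and nvmf_tgt is then started inside that namespace. A minimal standalone sketch of the same topology, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing used in this log:

  # target side lives in its own namespace; initiator stays in the default namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic on the default port, then confirm both directions work
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every command in the sketch appears in the trace (the harness additionally tags its iptables rule with an SPDK_NVMF comment so it can be removed verbatim during teardown).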
00:29:30.759 [2024-11-19 23:54:04.974885] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.759 [2024-11-19 23:54:05.047411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.017 [2024-11-19 23:54:05.094266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.017 [2024-11-19 23:54:05.094318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.017 [2024-11-19 23:54:05.094342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.017 [2024-11-19 23:54:05.094368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.017 [2024-11-19 23:54:05.094384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.017 [2024-11-19 23:54:05.095017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.017 [2024-11-19 23:54:05.242212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.017 null0 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d90622747eed48e797856ef28d70cfcc 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.017 [2024-11-19 23:54:05.282540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.017 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.273 nvme0n1 00:29:31.273 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.273 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:31.273 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.273 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.273 [ 00:29:31.273 { 00:29:31.273 "name": "nvme0n1", 00:29:31.273 "aliases": [ 00:29:31.273 "d9062274-7eed-48e7-9785-6ef28d70cfcc" 00:29:31.273 ], 00:29:31.273 "product_name": "NVMe disk", 00:29:31.273 "block_size": 512, 00:29:31.273 "num_blocks": 2097152, 00:29:31.273 "uuid": "d9062274-7eed-48e7-9785-6ef28d70cfcc", 00:29:31.273 "numa_id": 0, 00:29:31.273 "assigned_rate_limits": { 00:29:31.273 "rw_ios_per_sec": 0, 00:29:31.273 "rw_mbytes_per_sec": 0, 00:29:31.273 "r_mbytes_per_sec": 0, 00:29:31.273 "w_mbytes_per_sec": 0 00:29:31.273 }, 00:29:31.273 "claimed": false, 00:29:31.273 "zoned": false, 00:29:31.273 "supported_io_types": { 00:29:31.273 "read": true, 00:29:31.273 "write": true, 00:29:31.273 "unmap": false, 00:29:31.273 "flush": true, 00:29:31.273 "reset": true, 00:29:31.273 "nvme_admin": true, 00:29:31.273 "nvme_io": true, 00:29:31.273 "nvme_io_md": false, 00:29:31.273 "write_zeroes": true, 00:29:31.273 "zcopy": false, 00:29:31.273 "get_zone_info": false, 00:29:31.273 "zone_management": false, 00:29:31.273 "zone_append": false, 00:29:31.273 "compare": true, 00:29:31.273 "compare_and_write": true, 00:29:31.273 "abort": true, 00:29:31.273 "seek_hole": false, 00:29:31.273 "seek_data": false, 00:29:31.273 "copy": true, 00:29:31.273 "nvme_iov_md": false 00:29:31.273 }, 00:29:31.273 
"memory_domains": [ 00:29:31.273 { 00:29:31.273 "dma_device_id": "system", 00:29:31.273 "dma_device_type": 1 00:29:31.273 } 00:29:31.273 ], 00:29:31.273 "driver_specific": { 00:29:31.273 "nvme": [ 00:29:31.273 { 00:29:31.273 "trid": { 00:29:31.273 "trtype": "TCP", 00:29:31.273 "adrfam": "IPv4", 00:29:31.273 "traddr": "10.0.0.2", 00:29:31.273 "trsvcid": "4420", 00:29:31.273 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:31.273 }, 00:29:31.273 "ctrlr_data": { 00:29:31.273 "cntlid": 1, 00:29:31.273 "vendor_id": "0x8086", 00:29:31.273 "model_number": "SPDK bdev Controller", 00:29:31.273 "serial_number": "00000000000000000000", 00:29:31.273 "firmware_revision": "25.01", 00:29:31.273 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:31.273 "oacs": { 00:29:31.273 "security": 0, 00:29:31.273 "format": 0, 00:29:31.273 "firmware": 0, 00:29:31.273 "ns_manage": 0 00:29:31.273 }, 00:29:31.273 "multi_ctrlr": true, 00:29:31.273 "ana_reporting": false 00:29:31.273 }, 00:29:31.273 "vs": { 00:29:31.273 "nvme_version": "1.3" 00:29:31.273 }, 00:29:31.273 "ns_data": { 00:29:31.273 "id": 1, 00:29:31.273 "can_share": true 00:29:31.273 } 00:29:31.273 } 00:29:31.273 ], 00:29:31.273 "mp_policy": "active_passive" 00:29:31.273 } 00:29:31.273 } 00:29:31.273 ] 00:29:31.273 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.273 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:31.273 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.273 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.273 [2024-11-19 23:54:05.536126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:31.273 [2024-11-19 23:54:05.536211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2320480 (9): Bad file descriptor 00:29:31.531 [2024-11-19 23:54:05.668221] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.531 [ 00:29:31.531 { 00:29:31.531 "name": "nvme0n1", 00:29:31.531 "aliases": [ 00:29:31.531 "d9062274-7eed-48e7-9785-6ef28d70cfcc" 00:29:31.531 ], 00:29:31.531 "product_name": "NVMe disk", 00:29:31.531 "block_size": 512, 00:29:31.531 "num_blocks": 2097152, 00:29:31.531 "uuid": "d9062274-7eed-48e7-9785-6ef28d70cfcc", 00:29:31.531 "numa_id": 0, 00:29:31.531 "assigned_rate_limits": { 00:29:31.531 "rw_ios_per_sec": 0, 00:29:31.531 "rw_mbytes_per_sec": 0, 00:29:31.531 "r_mbytes_per_sec": 0, 00:29:31.531 "w_mbytes_per_sec": 0 00:29:31.531 }, 00:29:31.531 "claimed": false, 00:29:31.531 "zoned": false, 00:29:31.531 "supported_io_types": { 00:29:31.531 "read": true, 00:29:31.531 "write": true, 00:29:31.531 "unmap": false, 00:29:31.531 "flush": true, 00:29:31.531 "reset": true, 00:29:31.531 "nvme_admin": true, 00:29:31.531 "nvme_io": true, 00:29:31.531 "nvme_io_md": false, 00:29:31.531 "write_zeroes": true, 00:29:31.531 "zcopy": false, 00:29:31.531 "get_zone_info": false, 00:29:31.531 "zone_management": false, 00:29:31.531 "zone_append": false, 00:29:31.531 "compare": true, 00:29:31.531 "compare_and_write": true, 00:29:31.531 "abort": true, 00:29:31.531 "seek_hole": false, 00:29:31.531 "seek_data": false, 00:29:31.531 "copy": true, 00:29:31.531 "nvme_iov_md": false 00:29:31.531 }, 00:29:31.531 "memory_domains": [ 00:29:31.531 { 00:29:31.531 "dma_device_id": "system", 00:29:31.531 "dma_device_type": 1 00:29:31.531 } 00:29:31.531 ], 00:29:31.531 "driver_specific": { 00:29:31.531 "nvme": [ 00:29:31.531 { 00:29:31.531 "trid": { 00:29:31.531 "trtype": "TCP", 00:29:31.531 "adrfam": "IPv4", 00:29:31.531 "traddr": "10.0.0.2", 00:29:31.531 "trsvcid": "4420", 00:29:31.531 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:31.531 }, 00:29:31.531 "ctrlr_data": { 00:29:31.531 "cntlid": 2, 00:29:31.531 "vendor_id": "0x8086", 00:29:31.531 "model_number": "SPDK bdev Controller", 00:29:31.531 "serial_number": "00000000000000000000", 00:29:31.531 "firmware_revision": "25.01", 00:29:31.531 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:31.531 "oacs": { 00:29:31.531 "security": 0, 00:29:31.531 "format": 0, 00:29:31.531 "firmware": 0, 00:29:31.531 "ns_manage": 0 00:29:31.531 }, 00:29:31.531 "multi_ctrlr": true, 00:29:31.531 "ana_reporting": false 00:29:31.531 }, 00:29:31.531 "vs": { 00:29:31.531 "nvme_version": "1.3" 00:29:31.531 }, 00:29:31.531 "ns_data": { 00:29:31.531 "id": 1, 00:29:31.531 "can_share": true 00:29:31.531 } 00:29:31.531 } 00:29:31.531 ], 00:29:31.531 "mp_policy": "active_passive" 00:29:31.531 } 00:29:31.531 } 00:29:31.531 ] 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
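The two bdev_get_bdevs dumps bracket that reset: the bdev keeps its name, UUID, and trid, but ctrlr_data.cntlid moves from 1 to 2, showing the fabric controller was torn down and re-established rather than the old connection being reused. A quick way to pull just that field from the RPC output — a sketch using jq, which is an assumption here and not how the harness itself checks it:

  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
    | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'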
00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.glsK5A6JMZ 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.glsK5A6JMZ 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.glsK5A6JMZ 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.531 [2024-11-19 23:54:05.724779] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:31.531 [2024-11-19 23:54:05.724943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.531 [2024-11-19 23:54:05.740824] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:31.531 nvme0n1 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.531 [ 00:29:31.531 { 00:29:31.531 "name": "nvme0n1", 00:29:31.531 "aliases": [ 00:29:31.531 "d9062274-7eed-48e7-9785-6ef28d70cfcc" 00:29:31.531 ], 00:29:31.531 "product_name": "NVMe disk", 00:29:31.531 "block_size": 512, 00:29:31.531 "num_blocks": 2097152, 00:29:31.531 "uuid": "d9062274-7eed-48e7-9785-6ef28d70cfcc", 00:29:31.531 "numa_id": 0, 00:29:31.531 "assigned_rate_limits": { 00:29:31.531 "rw_ios_per_sec": 0, 00:29:31.531 "rw_mbytes_per_sec": 0, 00:29:31.531 "r_mbytes_per_sec": 0, 00:29:31.531 "w_mbytes_per_sec": 0 00:29:31.531 }, 00:29:31.531 "claimed": false, 00:29:31.531 "zoned": false, 00:29:31.531 "supported_io_types": { 00:29:31.531 "read": true, 00:29:31.531 "write": true, 00:29:31.531 "unmap": false, 00:29:31.531 "flush": true, 00:29:31.531 "reset": true, 00:29:31.531 "nvme_admin": true, 00:29:31.531 "nvme_io": true, 00:29:31.531 "nvme_io_md": false, 00:29:31.531 "write_zeroes": true, 00:29:31.531 "zcopy": false, 00:29:31.531 "get_zone_info": false, 00:29:31.531 "zone_management": false, 00:29:31.531 "zone_append": false, 00:29:31.531 "compare": true, 00:29:31.531 "compare_and_write": true, 00:29:31.531 "abort": true, 00:29:31.531 "seek_hole": false, 00:29:31.531 "seek_data": false, 00:29:31.531 "copy": true, 00:29:31.531 "nvme_iov_md": false 00:29:31.531 }, 00:29:31.531 "memory_domains": [ 00:29:31.531 { 00:29:31.531 "dma_device_id": "system", 00:29:31.531 "dma_device_type": 1 00:29:31.531 } 00:29:31.531 ], 00:29:31.531 "driver_specific": { 00:29:31.531 "nvme": [ 00:29:31.531 { 00:29:31.531 "trid": { 00:29:31.531 "trtype": "TCP", 00:29:31.531 "adrfam": "IPv4", 00:29:31.531 "traddr": "10.0.0.2", 00:29:31.531 "trsvcid": "4421", 00:29:31.531 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:31.531 }, 00:29:31.531 "ctrlr_data": { 00:29:31.531 "cntlid": 3, 00:29:31.531 "vendor_id": "0x8086", 00:29:31.531 "model_number": "SPDK bdev Controller", 00:29:31.531 "serial_number": "00000000000000000000", 00:29:31.531 "firmware_revision": "25.01", 00:29:31.531 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:31.531 "oacs": { 00:29:31.531 "security": 0, 00:29:31.531 "format": 0, 00:29:31.531 "firmware": 0, 00:29:31.531 "ns_manage": 0 00:29:31.531 }, 00:29:31.531 "multi_ctrlr": true, 00:29:31.531 "ana_reporting": false 00:29:31.531 }, 00:29:31.531 "vs": { 00:29:31.531 "nvme_version": "1.3" 00:29:31.531 }, 00:29:31.531 "ns_data": { 00:29:31.531 "id": 1, 00:29:31.531 "can_share": true 00:29:31.531 } 00:29:31.531 } 00:29:31.531 ], 00:29:31.531 "mp_policy": "active_passive" 00:29:31.531 } 00:29:31.531 } 00:29:31.531 ] 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.531 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.glsK5A6JMZ 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
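The last leg repeats the attach over a TLS-protected listener: a NVMeTLSkey-1:01: PSK is written to a temporary file, locked down to mode 0600 and registered with the keyring, open access to the subsystem is disabled, a second listener is added on port 4421 with --secure-channel, the host NQN is granted access with that PSK, and the initiator reconnects with the same key — the resulting controller shows up above as cntlid 3 on trsvcid 4421. Condensed from the trace (the key path is whatever mktemp returned in this run, and both listener and attach notices flag TLS support as experimental):

  KEY_PATH=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"    # the test restricts permissions before registering the key
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  rm -f "$KEY_PATH"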
00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:31.789 rmmod nvme_tcp 00:29:31.789 rmmod nvme_fabrics 00:29:31.789 rmmod nvme_keyring 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 275642 ']' 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 275642 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 275642 ']' 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 275642 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275642 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275642' 00:29:31.789 killing process with pid 275642 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 275642 00:29:31.789 23:54:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 275642 00:29:32.047 23:54:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.047 23:54:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.047 23:54:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.047 23:54:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:32.047 23:54:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:32.047 23:54:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.047 23:54:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.047 23:54:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.047 23:54:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:32.047 23:54:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.047 
23:54:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.047 23:54:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.005 23:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.005 00:29:34.005 real 0m5.569s 00:29:34.005 user 0m2.118s 00:29:34.005 sys 0m1.886s 00:29:34.005 23:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:34.005 23:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:34.005 ************************************ 00:29:34.005 END TEST nvmf_async_init 00:29:34.005 ************************************ 00:29:34.005 23:54:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:34.005 23:54:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:34.005 23:54:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:34.005 23:54:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.005 ************************************ 00:29:34.005 START TEST dma 00:29:34.005 ************************************ 00:29:34.005 23:54:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:34.005 * Looking for test storage... 00:29:34.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:34.005 23:54:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:34.005 23:54:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:29:34.005 23:54:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:34.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.264 --rc genhtml_branch_coverage=1 00:29:34.264 --rc genhtml_function_coverage=1 00:29:34.264 --rc genhtml_legend=1 00:29:34.264 --rc geninfo_all_blocks=1 00:29:34.264 --rc geninfo_unexecuted_blocks=1 00:29:34.264 00:29:34.264 ' 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:34.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.264 --rc genhtml_branch_coverage=1 00:29:34.264 --rc genhtml_function_coverage=1 00:29:34.264 --rc genhtml_legend=1 00:29:34.264 --rc geninfo_all_blocks=1 00:29:34.264 --rc geninfo_unexecuted_blocks=1 00:29:34.264 00:29:34.264 ' 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:34.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.264 --rc genhtml_branch_coverage=1 00:29:34.264 --rc genhtml_function_coverage=1 00:29:34.264 --rc genhtml_legend=1 00:29:34.264 --rc geninfo_all_blocks=1 00:29:34.264 --rc geninfo_unexecuted_blocks=1 00:29:34.264 00:29:34.264 ' 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:34.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.264 --rc genhtml_branch_coverage=1 00:29:34.264 --rc genhtml_function_coverage=1 00:29:34.264 --rc genhtml_legend=1 00:29:34.264 --rc geninfo_all_blocks=1 00:29:34.264 --rc geninfo_unexecuted_blocks=1 00:29:34.264 00:29:34.264 ' 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.264 
23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.264 23:54:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:34.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:34.265 00:29:34.265 real 0m0.184s 00:29:34.265 user 0m0.126s 00:29:34.265 sys 0m0.067s 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:34.265 ************************************ 00:29:34.265 END TEST dma 00:29:34.265 ************************************ 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.265 ************************************ 00:29:34.265 START TEST nvmf_identify 00:29:34.265 
************************************ 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:34.265 * Looking for test storage... 00:29:34.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:29:34.265 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:34.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.524 --rc genhtml_branch_coverage=1 00:29:34.524 --rc genhtml_function_coverage=1 00:29:34.524 --rc genhtml_legend=1 00:29:34.524 --rc geninfo_all_blocks=1 00:29:34.524 --rc geninfo_unexecuted_blocks=1 00:29:34.524 00:29:34.524 ' 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:34.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.524 --rc genhtml_branch_coverage=1 00:29:34.524 --rc genhtml_function_coverage=1 00:29:34.524 --rc genhtml_legend=1 00:29:34.524 --rc geninfo_all_blocks=1 00:29:34.524 --rc geninfo_unexecuted_blocks=1 00:29:34.524 00:29:34.524 ' 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:34.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.524 --rc genhtml_branch_coverage=1 00:29:34.524 --rc genhtml_function_coverage=1 00:29:34.524 --rc genhtml_legend=1 00:29:34.524 --rc geninfo_all_blocks=1 00:29:34.524 --rc geninfo_unexecuted_blocks=1 00:29:34.524 00:29:34.524 ' 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:34.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.524 --rc genhtml_branch_coverage=1 00:29:34.524 --rc genhtml_function_coverage=1 00:29:34.524 --rc genhtml_legend=1 00:29:34.524 --rc geninfo_all_blocks=1 00:29:34.524 --rc geninfo_unexecuted_blocks=1 00:29:34.524 00:29:34.524 ' 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.524 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:34.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:34.525 23:54:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:36.429 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:36.430 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:36.430 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
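[editor's note] The gather_supported_nvmf_pci_devs trace above resolves each matched E810 PCI function to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/ and then checking that the interface is up. A minimal standalone sketch of that lookup, using the two PCI addresses reported in the log (0000:0a:00.0 and 0000:0a:00.1) and operstate as a stand-in for the script's up-check, could look like this:

    # Sketch only: resolve PCI functions to net interfaces the way nvmf/common.sh does.
    # The two addresses come from the "Found 0000:0a:00.x" lines above; adjust for other hardware.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # Every netdev bound to this PCI function appears as a directory here.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        for dev_path in "${pci_net_devs[@]}"; do
            dev=${dev_path##*/}                    # strip the sysfs prefix, keep the ifname
            state=$(cat "$dev_path/operstate")     # the test only keeps interfaces that are up
            echo "Found net device under $pci: $dev (operstate: $state)"
        done
    done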
00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:36.430 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:36.430 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:29:36.430 00:29:36.430 --- 10.0.0.2 ping statistics --- 00:29:36.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.430 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:36.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:29:36.430 00:29:36.430 --- 10.0.0.1 ping statistics --- 00:29:36.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.430 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:36.430 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=278293 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 278293 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 278293 ']' 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.689 23:54:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.689 [2024-11-19 23:54:10.809877] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
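[editor's note] The nvmf_tcp_init sequence traced above splits the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into a fresh network namespace and given 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule opens TCP port 4420, reachability is verified with ping in both directions, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace. A condensed sketch of those steps, assuming root privileges, the interface names from the log, and an SPDK build tree as the working directory:

    # Sketch of the target/initiator split performed by nvmftestinit above (run as root).
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                      # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> root namespace
    modprobe nvme-tcp
    # Start the SPDK target inside the namespace, as host/identify.sh does:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &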
00:29:36.689 [2024-11-19 23:54:10.809969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.689 [2024-11-19 23:54:10.884942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:36.689 [2024-11-19 23:54:10.933541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.689 [2024-11-19 23:54:10.933594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.689 [2024-11-19 23:54:10.933627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.689 [2024-11-19 23:54:10.933645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.689 [2024-11-19 23:54:10.933659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.689 [2024-11-19 23:54:10.935350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.689 [2024-11-19 23:54:10.935406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:36.689 [2024-11-19 23:54:10.935455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:36.689 [2024-11-19 23:54:10.935457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.949 [2024-11-19 23:54:11.057809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.949 Malloc0 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.949 [2024-11-19 23:54:11.147297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:36.949 [ 00:29:36.949 { 00:29:36.949 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:36.949 "subtype": "Discovery", 00:29:36.949 "listen_addresses": [ 00:29:36.949 { 00:29:36.949 "trtype": "TCP", 00:29:36.949 "adrfam": "IPv4", 00:29:36.949 "traddr": "10.0.0.2", 00:29:36.949 "trsvcid": "4420" 00:29:36.949 } 00:29:36.949 ], 00:29:36.949 "allow_any_host": true, 00:29:36.949 "hosts": [] 00:29:36.949 }, 00:29:36.949 { 00:29:36.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.949 "subtype": "NVMe", 00:29:36.949 "listen_addresses": [ 00:29:36.949 { 00:29:36.949 "trtype": "TCP", 00:29:36.949 "adrfam": "IPv4", 00:29:36.949 "traddr": "10.0.0.2", 00:29:36.949 "trsvcid": "4420" 00:29:36.949 } 00:29:36.949 ], 00:29:36.949 "allow_any_host": true, 00:29:36.949 "hosts": [], 00:29:36.949 "serial_number": "SPDK00000000000001", 00:29:36.949 "model_number": "SPDK bdev Controller", 00:29:36.949 "max_namespaces": 32, 00:29:36.949 "min_cntlid": 1, 00:29:36.949 "max_cntlid": 65519, 00:29:36.949 "namespaces": [ 00:29:36.949 { 00:29:36.949 "nsid": 1, 00:29:36.949 "bdev_name": "Malloc0", 00:29:36.949 "name": "Malloc0", 00:29:36.949 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:36.949 "eui64": "ABCDEF0123456789", 00:29:36.949 "uuid": "9929ac37-c72a-4671-b80b-be27670a156d" 00:29:36.949 } 00:29:36.949 ] 00:29:36.949 } 00:29:36.949 ] 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.949 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:36.949 [2024-11-19 23:54:11.189991] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:29:36.949 [2024-11-19 23:54:11.190034] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278435 ] 00:29:36.949 [2024-11-19 23:54:11.240653] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:36.949 [2024-11-19 23:54:11.240724] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:36.949 [2024-11-19 23:54:11.240734] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:36.949 [2024-11-19 23:54:11.240750] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:36.949 [2024-11-19 23:54:11.240767] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:36.949 [2024-11-19 23:54:11.244517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:36.949 [2024-11-19 23:54:11.244583] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5c5650 0 00:29:36.949 [2024-11-19 23:54:11.244721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:36.949 [2024-11-19 23:54:11.244736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:36.949 [2024-11-19 23:54:11.244745] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:36.949 [2024-11-19 23:54:11.244751] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:36.950 [2024-11-19 23:54:11.244791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.244804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.244811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c5650) 00:29:36.950 [2024-11-19 23:54:11.244829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:36.950 [2024-11-19 23:54:11.244855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61ff40, cid 0, qid 0 00:29:36.950 [2024-11-19 23:54:11.252084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.950 [2024-11-19 23:54:11.252102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.950 [2024-11-19 23:54:11.252110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x61ff40) on tqpair=0x5c5650 00:29:36.950 [2024-11-19 23:54:11.252136] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:36.950 [2024-11-19 23:54:11.252148] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:36.950 [2024-11-19 23:54:11.252157] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:36.950 [2024-11-19 23:54:11.252179] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c5650) 00:29:36.950 [2024-11-19 23:54:11.252205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.950 [2024-11-19 23:54:11.252230] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61ff40, cid 0, qid 0 00:29:36.950 [2024-11-19 23:54:11.252330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.950 [2024-11-19 23:54:11.252342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.950 [2024-11-19 23:54:11.252349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x61ff40) on tqpair=0x5c5650 00:29:36.950 [2024-11-19 23:54:11.252365] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:36.950 [2024-11-19 23:54:11.252377] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:36.950 [2024-11-19 23:54:11.252390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c5650) 00:29:36.950 [2024-11-19 23:54:11.252414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.950 [2024-11-19 23:54:11.252435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61ff40, cid 0, qid 0 00:29:36.950 [2024-11-19 23:54:11.252513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.950 [2024-11-19 23:54:11.252525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.950 [2024-11-19 23:54:11.252531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x61ff40) on tqpair=0x5c5650 00:29:36.950 [2024-11-19 23:54:11.252547] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:36.950 [2024-11-19 23:54:11.252561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:36.950 [2024-11-19 23:54:11.252573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c5650) 00:29:36.950 [2024-11-19 23:54:11.252597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.950 [2024-11-19 23:54:11.252618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61ff40, cid 0, qid 0 
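[editor's note] The discovery and NVMe subsystems that this identify run is connecting to were configured by the rpc_cmd calls earlier in the trace (host/identify.sh lines 24 through 35). Issued directly with SPDK's scripts/rpc.py against the running target, that configuration would look roughly like the sketch below; rpc.py's default socket, /var/tmp/spdk.sock, matches the waitforlisten message in the log, and the flags are copied verbatim from the trace rather than re-derived.

    # Sketch of the RPC sequence host/identify.sh drives through rpc_cmd.
    RPC=./scripts/rpc.py                                   # path relative to the SPDK repo
    $RPC nvmf_create_transport -t tcp -o -u 8192           # same transport flags as the trace above
    $RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems                               # dumps the JSON shown earlier in the log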
00:29:36.950 [2024-11-19 23:54:11.252702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.950 [2024-11-19 23:54:11.252717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.950 [2024-11-19 23:54:11.252723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x61ff40) on tqpair=0x5c5650 00:29:36.950 [2024-11-19 23:54:11.252739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:36.950 [2024-11-19 23:54:11.252755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c5650) 00:29:36.950 [2024-11-19 23:54:11.252781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.950 [2024-11-19 23:54:11.252801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61ff40, cid 0, qid 0 00:29:36.950 [2024-11-19 23:54:11.252878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.950 [2024-11-19 23:54:11.252891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.950 [2024-11-19 23:54:11.252897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.252904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x61ff40) on tqpair=0x5c5650 00:29:36.950 [2024-11-19 23:54:11.252912] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:36.950 [2024-11-19 23:54:11.252920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:36.950 [2024-11-19 23:54:11.252932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:36.950 [2024-11-19 23:54:11.253042] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:36.950 [2024-11-19 23:54:11.253050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:36.950 [2024-11-19 23:54:11.253065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.253080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.253087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c5650) 00:29:36.950 [2024-11-19 23:54:11.253098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.950 [2024-11-19 23:54:11.253120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61ff40, cid 0, qid 0 00:29:36.950 [2024-11-19 23:54:11.253215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.950 [2024-11-19 23:54:11.253229] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.950 [2024-11-19 23:54:11.253236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.253243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x61ff40) on tqpair=0x5c5650 00:29:36.950 [2024-11-19 23:54:11.253251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:36.950 [2024-11-19 23:54:11.253267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.253276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.253283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c5650) 00:29:36.950 [2024-11-19 23:54:11.253293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.950 [2024-11-19 23:54:11.253318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61ff40, cid 0, qid 0 00:29:36.950 [2024-11-19 23:54:11.253401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.950 [2024-11-19 23:54:11.253415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.950 [2024-11-19 23:54:11.253422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.253428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x61ff40) on tqpair=0x5c5650 00:29:36.950 [2024-11-19 23:54:11.253435] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:36.950 [2024-11-19 23:54:11.253444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:36.950 [2024-11-19 23:54:11.253457] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:36.950 [2024-11-19 23:54:11.253474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:36.950 [2024-11-19 23:54:11.253489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.253496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c5650) 00:29:36.950 [2024-11-19 23:54:11.253507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.950 [2024-11-19 23:54:11.253528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61ff40, cid 0, qid 0 00:29:36.950 [2024-11-19 23:54:11.253657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.950 [2024-11-19 23:54:11.253669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.950 [2024-11-19 23:54:11.253676] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.253683] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5c5650): datao=0, datal=4096, cccid=0 00:29:36.950 [2024-11-19 23:54:11.253690] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x61ff40) on tqpair(0x5c5650): expected_datao=0, payload_size=4096 00:29:36.950 [2024-11-19 23:54:11.253697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.950 [2024-11-19 23:54:11.253715] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.253724] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.253736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.951 [2024-11-19 23:54:11.253745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.951 [2024-11-19 23:54:11.253752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.253758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x61ff40) on tqpair=0x5c5650 00:29:36.951 [2024-11-19 23:54:11.253770] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:36.951 [2024-11-19 23:54:11.253778] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:36.951 [2024-11-19 23:54:11.253786] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:36.951 [2024-11-19 23:54:11.253800] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:36.951 [2024-11-19 23:54:11.253809] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:36.951 [2024-11-19 23:54:11.253817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:36.951 [2024-11-19 23:54:11.253835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:36.951 [2024-11-19 23:54:11.253852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.253860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.253866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c5650) 00:29:36.951 [2024-11-19 23:54:11.253877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:36.951 [2024-11-19 23:54:11.253899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61ff40, cid 0, qid 0 00:29:36.951 [2024-11-19 23:54:11.253992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.951 [2024-11-19 23:54:11.254006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.951 [2024-11-19 23:54:11.254013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x61ff40) on tqpair=0x5c5650 00:29:36.951 [2024-11-19 23:54:11.254031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5c5650) 00:29:36.951 [2024-11-19 
23:54:11.254054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.951 [2024-11-19 23:54:11.254065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5c5650) 00:29:36.951 [2024-11-19 23:54:11.254095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.951 [2024-11-19 23:54:11.254105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5c5650) 00:29:36.951 [2024-11-19 23:54:11.254126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.951 [2024-11-19 23:54:11.254136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:36.951 [2024-11-19 23:54:11.254157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.951 [2024-11-19 23:54:11.254165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:36.951 [2024-11-19 23:54:11.254180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:36.951 [2024-11-19 23:54:11.254191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5c5650) 00:29:36.951 [2024-11-19 23:54:11.254208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.951 [2024-11-19 23:54:11.254231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x61ff40, cid 0, qid 0 00:29:36.951 [2024-11-19 23:54:11.254242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6200c0, cid 1, qid 0 00:29:36.951 [2024-11-19 23:54:11.254250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x620240, cid 2, qid 0 00:29:36.951 [2024-11-19 23:54:11.254262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:36.951 [2024-11-19 23:54:11.254270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x620540, cid 4, qid 0 00:29:36.951 [2024-11-19 23:54:11.254376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:36.951 [2024-11-19 23:54:11.254390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:36.951 [2024-11-19 23:54:11.254397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:36.951 
[2024-11-19 23:54:11.254404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x620540) on tqpair=0x5c5650 00:29:36.951 [2024-11-19 23:54:11.254417] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:36.951 [2024-11-19 23:54:11.254427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:36.951 [2024-11-19 23:54:11.254444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5c5650) 00:29:36.951 [2024-11-19 23:54:11.254464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.951 [2024-11-19 23:54:11.254485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x620540, cid 4, qid 0 00:29:36.951 [2024-11-19 23:54:11.254588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:36.951 [2024-11-19 23:54:11.254603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:36.951 [2024-11-19 23:54:11.254610] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254616] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5c5650): datao=0, datal=4096, cccid=4 00:29:36.951 [2024-11-19 23:54:11.254623] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x620540) on tqpair(0x5c5650): expected_datao=0, payload_size=4096 00:29:36.951 [2024-11-19 23:54:11.254630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254646] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:36.951 [2024-11-19 23:54:11.254655] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:37.216 [2024-11-19 23:54:11.299083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.216 [2024-11-19 23:54:11.299102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.216 [2024-11-19 23:54:11.299109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.216 [2024-11-19 23:54:11.299116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x620540) on tqpair=0x5c5650 00:29:37.216 [2024-11-19 23:54:11.299135] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:37.216 [2024-11-19 23:54:11.299174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.216 [2024-11-19 23:54:11.299185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5c5650) 00:29:37.216 [2024-11-19 23:54:11.299197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.216 [2024-11-19 23:54:11.299209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.216 [2024-11-19 23:54:11.299216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.216 [2024-11-19 23:54:11.299222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5c5650) 00:29:37.216 [2024-11-19 23:54:11.299231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.216 [2024-11-19 23:54:11.299259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x620540, cid 4, qid 0 00:29:37.216 [2024-11-19 23:54:11.299272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6206c0, cid 5, qid 0 00:29:37.216 [2024-11-19 23:54:11.299421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:37.217 [2024-11-19 23:54:11.299440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:37.217 [2024-11-19 23:54:11.299448] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.299454] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5c5650): datao=0, datal=1024, cccid=4 00:29:37.217 [2024-11-19 23:54:11.299461] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x620540) on tqpair(0x5c5650): expected_datao=0, payload_size=1024 00:29:37.217 [2024-11-19 23:54:11.299469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.299478] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.299486] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.299494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.217 [2024-11-19 23:54:11.299503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.217 [2024-11-19 23:54:11.299509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.299515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6206c0) on tqpair=0x5c5650 00:29:37.217 [2024-11-19 23:54:11.341153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.217 [2024-11-19 23:54:11.341173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.217 [2024-11-19 23:54:11.341180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.341187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x620540) on tqpair=0x5c5650 00:29:37.217 [2024-11-19 23:54:11.341204] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.341213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5c5650) 00:29:37.217 [2024-11-19 23:54:11.341225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.217 [2024-11-19 23:54:11.341254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x620540, cid 4, qid 0 00:29:37.217 [2024-11-19 23:54:11.341354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:37.217 [2024-11-19 23:54:11.341367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:37.217 [2024-11-19 23:54:11.341374] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.341380] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5c5650): datao=0, datal=3072, cccid=4 00:29:37.217 [2024-11-19 23:54:11.341391] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x620540) on tqpair(0x5c5650): expected_datao=0, payload_size=3072 00:29:37.217 [2024-11-19 23:54:11.341398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
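[editor's note] The GET LOG PAGE commands in this part of the trace are the identify tool paging through the discovery log of the service at 10.0.0.2:4420. Assuming nvme-cli is installed on the initiator side (the root namespace), the same discovery log can be read back with standard tooling; the hostnqn/hostid values would come from the NVME_HOST variables nvmf/common.sh set earlier:

    # Sketch only: query the same discovery service with nvme-cli from the root namespace.
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # Connecting to the NVMe subsystem advertised there would then be:
    #   nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    #       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"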
00:29:37.217 [2024-11-19 23:54:11.341416] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.341424] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.341436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.217 [2024-11-19 23:54:11.341445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.217 [2024-11-19 23:54:11.341452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.341458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x620540) on tqpair=0x5c5650 00:29:37.217 [2024-11-19 23:54:11.341472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.341480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5c5650) 00:29:37.217 [2024-11-19 23:54:11.341490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.217 [2024-11-19 23:54:11.341518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x620540, cid 4, qid 0 00:29:37.217 [2024-11-19 23:54:11.341616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:37.217 [2024-11-19 23:54:11.341628] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:37.217 [2024-11-19 23:54:11.341639] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.341646] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5c5650): datao=0, datal=8, cccid=4 00:29:37.217 [2024-11-19 23:54:11.341654] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x620540) on tqpair(0x5c5650): expected_datao=0, payload_size=8 00:29:37.217 [2024-11-19 23:54:11.341661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.341671] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.341678] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.387098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.217 [2024-11-19 23:54:11.387117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.217 [2024-11-19 23:54:11.387124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.217 [2024-11-19 23:54:11.387131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x620540) on tqpair=0x5c5650 00:29:37.217 ===================================================== 00:29:37.217 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:37.217 ===================================================== 00:29:37.217 Controller Capabilities/Features 00:29:37.217 ================================ 00:29:37.217 Vendor ID: 0000 00:29:37.217 Subsystem Vendor ID: 0000 00:29:37.217 Serial Number: .................... 00:29:37.217 Model Number: ........................................ 
00:29:37.217 Firmware Version: 25.01 00:29:37.217 Recommended Arb Burst: 0 00:29:37.217 IEEE OUI Identifier: 00 00 00 00:29:37.217 Multi-path I/O 00:29:37.217 May have multiple subsystem ports: No 00:29:37.217 May have multiple controllers: No 00:29:37.217 Associated with SR-IOV VF: No 00:29:37.217 Max Data Transfer Size: 131072 00:29:37.217 Max Number of Namespaces: 0 00:29:37.217 Max Number of I/O Queues: 1024 00:29:37.217 NVMe Specification Version (VS): 1.3 00:29:37.217 NVMe Specification Version (Identify): 1.3 00:29:37.217 Maximum Queue Entries: 128 00:29:37.217 Contiguous Queues Required: Yes 00:29:37.217 Arbitration Mechanisms Supported 00:29:37.217 Weighted Round Robin: Not Supported 00:29:37.217 Vendor Specific: Not Supported 00:29:37.217 Reset Timeout: 15000 ms 00:29:37.217 Doorbell Stride: 4 bytes 00:29:37.217 NVM Subsystem Reset: Not Supported 00:29:37.217 Command Sets Supported 00:29:37.217 NVM Command Set: Supported 00:29:37.217 Boot Partition: Not Supported 00:29:37.217 Memory Page Size Minimum: 4096 bytes 00:29:37.217 Memory Page Size Maximum: 4096 bytes 00:29:37.217 Persistent Memory Region: Not Supported 00:29:37.217 Optional Asynchronous Events Supported 00:29:37.217 Namespace Attribute Notices: Not Supported 00:29:37.217 Firmware Activation Notices: Not Supported 00:29:37.217 ANA Change Notices: Not Supported 00:29:37.217 PLE Aggregate Log Change Notices: Not Supported 00:29:37.217 LBA Status Info Alert Notices: Not Supported 00:29:37.217 EGE Aggregate Log Change Notices: Not Supported 00:29:37.217 Normal NVM Subsystem Shutdown event: Not Supported 00:29:37.217 Zone Descriptor Change Notices: Not Supported 00:29:37.217 Discovery Log Change Notices: Supported 00:29:37.217 Controller Attributes 00:29:37.217 128-bit Host Identifier: Not Supported 00:29:37.217 Non-Operational Permissive Mode: Not Supported 00:29:37.217 NVM Sets: Not Supported 00:29:37.217 Read Recovery Levels: Not Supported 00:29:37.217 Endurance Groups: Not Supported 00:29:37.217 Predictable Latency Mode: Not Supported 00:29:37.217 Traffic Based Keep ALive: Not Supported 00:29:37.217 Namespace Granularity: Not Supported 00:29:37.217 SQ Associations: Not Supported 00:29:37.217 UUID List: Not Supported 00:29:37.217 Multi-Domain Subsystem: Not Supported 00:29:37.217 Fixed Capacity Management: Not Supported 00:29:37.217 Variable Capacity Management: Not Supported 00:29:37.217 Delete Endurance Group: Not Supported 00:29:37.217 Delete NVM Set: Not Supported 00:29:37.217 Extended LBA Formats Supported: Not Supported 00:29:37.217 Flexible Data Placement Supported: Not Supported 00:29:37.217 00:29:37.217 Controller Memory Buffer Support 00:29:37.217 ================================ 00:29:37.217 Supported: No 00:29:37.217 00:29:37.217 Persistent Memory Region Support 00:29:37.217 ================================ 00:29:37.217 Supported: No 00:29:37.217 00:29:37.217 Admin Command Set Attributes 00:29:37.217 ============================ 00:29:37.217 Security Send/Receive: Not Supported 00:29:37.218 Format NVM: Not Supported 00:29:37.218 Firmware Activate/Download: Not Supported 00:29:37.218 Namespace Management: Not Supported 00:29:37.218 Device Self-Test: Not Supported 00:29:37.218 Directives: Not Supported 00:29:37.218 NVMe-MI: Not Supported 00:29:37.218 Virtualization Management: Not Supported 00:29:37.218 Doorbell Buffer Config: Not Supported 00:29:37.218 Get LBA Status Capability: Not Supported 00:29:37.218 Command & Feature Lockdown Capability: Not Supported 00:29:37.218 Abort Command Limit: 1 00:29:37.218 Async 
Event Request Limit: 4 00:29:37.218 Number of Firmware Slots: N/A 00:29:37.218 Firmware Slot 1 Read-Only: N/A 00:29:37.218 Firmware Activation Without Reset: N/A 00:29:37.218 Multiple Update Detection Support: N/A 00:29:37.218 Firmware Update Granularity: No Information Provided 00:29:37.218 Per-Namespace SMART Log: No 00:29:37.218 Asymmetric Namespace Access Log Page: Not Supported 00:29:37.218 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:37.218 Command Effects Log Page: Not Supported 00:29:37.218 Get Log Page Extended Data: Supported 00:29:37.218 Telemetry Log Pages: Not Supported 00:29:37.218 Persistent Event Log Pages: Not Supported 00:29:37.218 Supported Log Pages Log Page: May Support 00:29:37.218 Commands Supported & Effects Log Page: Not Supported 00:29:37.218 Feature Identifiers & Effects Log Page:May Support 00:29:37.218 NVMe-MI Commands & Effects Log Page: May Support 00:29:37.218 Data Area 4 for Telemetry Log: Not Supported 00:29:37.218 Error Log Page Entries Supported: 128 00:29:37.218 Keep Alive: Not Supported 00:29:37.218 00:29:37.218 NVM Command Set Attributes 00:29:37.218 ========================== 00:29:37.218 Submission Queue Entry Size 00:29:37.218 Max: 1 00:29:37.218 Min: 1 00:29:37.218 Completion Queue Entry Size 00:29:37.218 Max: 1 00:29:37.218 Min: 1 00:29:37.218 Number of Namespaces: 0 00:29:37.218 Compare Command: Not Supported 00:29:37.218 Write Uncorrectable Command: Not Supported 00:29:37.218 Dataset Management Command: Not Supported 00:29:37.218 Write Zeroes Command: Not Supported 00:29:37.218 Set Features Save Field: Not Supported 00:29:37.218 Reservations: Not Supported 00:29:37.218 Timestamp: Not Supported 00:29:37.218 Copy: Not Supported 00:29:37.218 Volatile Write Cache: Not Present 00:29:37.218 Atomic Write Unit (Normal): 1 00:29:37.218 Atomic Write Unit (PFail): 1 00:29:37.218 Atomic Compare & Write Unit: 1 00:29:37.218 Fused Compare & Write: Supported 00:29:37.218 Scatter-Gather List 00:29:37.218 SGL Command Set: Supported 00:29:37.218 SGL Keyed: Supported 00:29:37.218 SGL Bit Bucket Descriptor: Not Supported 00:29:37.218 SGL Metadata Pointer: Not Supported 00:29:37.218 Oversized SGL: Not Supported 00:29:37.218 SGL Metadata Address: Not Supported 00:29:37.218 SGL Offset: Supported 00:29:37.218 Transport SGL Data Block: Not Supported 00:29:37.218 Replay Protected Memory Block: Not Supported 00:29:37.218 00:29:37.218 Firmware Slot Information 00:29:37.218 ========================= 00:29:37.218 Active slot: 0 00:29:37.218 00:29:37.218 00:29:37.218 Error Log 00:29:37.218 ========= 00:29:37.218 00:29:37.218 Active Namespaces 00:29:37.218 ================= 00:29:37.218 Discovery Log Page 00:29:37.218 ================== 00:29:37.218 Generation Counter: 2 00:29:37.218 Number of Records: 2 00:29:37.218 Record Format: 0 00:29:37.218 00:29:37.218 Discovery Log Entry 0 00:29:37.218 ---------------------- 00:29:37.218 Transport Type: 3 (TCP) 00:29:37.218 Address Family: 1 (IPv4) 00:29:37.218 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:37.218 Entry Flags: 00:29:37.218 Duplicate Returned Information: 1 00:29:37.218 Explicit Persistent Connection Support for Discovery: 1 00:29:37.218 Transport Requirements: 00:29:37.218 Secure Channel: Not Required 00:29:37.218 Port ID: 0 (0x0000) 00:29:37.218 Controller ID: 65535 (0xffff) 00:29:37.218 Admin Max SQ Size: 128 00:29:37.218 Transport Service Identifier: 4420 00:29:37.218 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:37.218 Transport Address: 10.0.0.2 00:29:37.218 
Discovery Log Entry 1 00:29:37.218 ---------------------- 00:29:37.218 Transport Type: 3 (TCP) 00:29:37.218 Address Family: 1 (IPv4) 00:29:37.218 Subsystem Type: 2 (NVM Subsystem) 00:29:37.218 Entry Flags: 00:29:37.218 Duplicate Returned Information: 0 00:29:37.218 Explicit Persistent Connection Support for Discovery: 0 00:29:37.218 Transport Requirements: 00:29:37.218 Secure Channel: Not Required 00:29:37.218 Port ID: 0 (0x0000) 00:29:37.218 Controller ID: 65535 (0xffff) 00:29:37.218 Admin Max SQ Size: 128 00:29:37.218 Transport Service Identifier: 4420 00:29:37.218 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:37.218 Transport Address: 10.0.0.2 [2024-11-19 23:54:11.387255] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:37.218 [2024-11-19 23:54:11.387277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x61ff40) on tqpair=0x5c5650 00:29:37.218 [2024-11-19 23:54:11.387289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.218 [2024-11-19 23:54:11.387298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6200c0) on tqpair=0x5c5650 00:29:37.218 [2024-11-19 23:54:11.387306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.218 [2024-11-19 23:54:11.387314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x620240) on tqpair=0x5c5650 00:29:37.218 [2024-11-19 23:54:11.387321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.218 [2024-11-19 23:54:11.387329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.218 [2024-11-19 23:54:11.387336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.218 [2024-11-19 23:54:11.387354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.218 [2024-11-19 23:54:11.387363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.218 [2024-11-19 23:54:11.387370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.218 [2024-11-19 23:54:11.387381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.218 [2024-11-19 23:54:11.387405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.218 [2024-11-19 23:54:11.387493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.218 [2024-11-19 23:54:11.387507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.218 [2024-11-19 23:54:11.387515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.218 [2024-11-19 23:54:11.387521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.218 [2024-11-19 23:54:11.387532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.218 [2024-11-19 23:54:11.387540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.218 [2024-11-19 23:54:11.387546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.218 [2024-11-19 23:54:11.387557] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.218 [2024-11-19 23:54:11.387583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.218 [2024-11-19 23:54:11.387690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.218 [2024-11-19 23:54:11.387706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.218 [2024-11-19 23:54:11.387714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.218 [2024-11-19 23:54:11.387720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.218 [2024-11-19 23:54:11.387728] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:37.218 [2024-11-19 23:54:11.387736] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:37.218 [2024-11-19 23:54:11.387751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.218 [2024-11-19 23:54:11.387760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.218 [2024-11-19 23:54:11.387766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.218 [2024-11-19 23:54:11.387777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.218 [2024-11-19 23:54:11.387797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.218 [2024-11-19 23:54:11.387872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.218 [2024-11-19 23:54:11.387884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.218 [2024-11-19 23:54:11.387890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.218 [2024-11-19 23:54:11.387897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.219 [2024-11-19 23:54:11.387913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.387922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.387929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.219 [2024-11-19 23:54:11.387939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.219 [2024-11-19 23:54:11.387959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.219 [2024-11-19 23:54:11.388040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.219 [2024-11-19 23:54:11.388054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.219 [2024-11-19 23:54:11.388061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.219 [2024-11-19 23:54:11.388095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388112] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.219 [2024-11-19 23:54:11.388122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.219 [2024-11-19 23:54:11.388143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.219 [2024-11-19 23:54:11.388224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.219 [2024-11-19 23:54:11.388239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.219 [2024-11-19 23:54:11.388245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.219 [2024-11-19 23:54:11.388268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.219 [2024-11-19 23:54:11.388294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.219 [2024-11-19 23:54:11.388314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.219 [2024-11-19 23:54:11.388395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.219 [2024-11-19 23:54:11.388409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.219 [2024-11-19 23:54:11.388416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.219 [2024-11-19 23:54:11.388439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.219 [2024-11-19 23:54:11.388464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.219 [2024-11-19 23:54:11.388485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.219 [2024-11-19 23:54:11.388562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.219 [2024-11-19 23:54:11.388574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.219 [2024-11-19 23:54:11.388581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.219 [2024-11-19 23:54:11.388603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.219 [2024-11-19 23:54:11.388629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.219 [2024-11-19 23:54:11.388649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.219 [2024-11-19 23:54:11.388725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.219 [2024-11-19 23:54:11.388739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.219 [2024-11-19 23:54:11.388746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.219 [2024-11-19 23:54:11.388769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.219 [2024-11-19 23:54:11.388795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.219 [2024-11-19 23:54:11.388815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.219 [2024-11-19 23:54:11.388890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.219 [2024-11-19 23:54:11.388905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.219 [2024-11-19 23:54:11.388912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.219 [2024-11-19 23:54:11.388934] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.388950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.219 [2024-11-19 23:54:11.388960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.219 [2024-11-19 23:54:11.388980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.219 [2024-11-19 23:54:11.389060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.219 [2024-11-19 23:54:11.389086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.219 [2024-11-19 23:54:11.389095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.219 [2024-11-19 23:54:11.389118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.219 [2024-11-19 23:54:11.389144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.219 [2024-11-19 23:54:11.389165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.219 [2024-11-19 23:54:11.389240] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.219 [2024-11-19 23:54:11.389252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.219 [2024-11-19 23:54:11.389259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.219 [2024-11-19 23:54:11.389281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.219 [2024-11-19 23:54:11.389306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.219 [2024-11-19 23:54:11.389327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.219 [2024-11-19 23:54:11.389403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.219 [2024-11-19 23:54:11.389417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.219 [2024-11-19 23:54:11.389423] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.219 [2024-11-19 23:54:11.389446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.219 [2024-11-19 23:54:11.389472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.219 [2024-11-19 23:54:11.389492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.219 [2024-11-19 23:54:11.389571] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.219 [2024-11-19 23:54:11.389585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.219 [2024-11-19 23:54:11.389592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.219 [2024-11-19 23:54:11.389615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.219 [2024-11-19 23:54:11.389640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.219 [2024-11-19 23:54:11.389661] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.219 [2024-11-19 23:54:11.389740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.219 [2024-11-19 23:54:11.389754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.219 [2024-11-19 23:54:11.389765] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.219 [2024-11-19 23:54:11.389789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.219 [2024-11-19 23:54:11.389804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.219 [2024-11-19 23:54:11.389814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.220 [2024-11-19 23:54:11.389835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.220 [2024-11-19 23:54:11.389909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.220 [2024-11-19 23:54:11.389921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.220 [2024-11-19 23:54:11.389928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.389935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.220 [2024-11-19 23:54:11.389950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.389960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.389966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.220 [2024-11-19 23:54:11.389976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.220 [2024-11-19 23:54:11.389996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.220 [2024-11-19 23:54:11.390084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.220 [2024-11-19 23:54:11.390097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.220 [2024-11-19 23:54:11.390104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.220 [2024-11-19 23:54:11.390127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.220 [2024-11-19 23:54:11.390152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.220 [2024-11-19 23:54:11.390173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.220 [2024-11-19 23:54:11.390252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.220 [2024-11-19 23:54:11.390266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.220 [2024-11-19 23:54:11.390273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.220 
[2024-11-19 23:54:11.390296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.220 [2024-11-19 23:54:11.390322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.220 [2024-11-19 23:54:11.390342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.220 [2024-11-19 23:54:11.390425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.220 [2024-11-19 23:54:11.390439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.220 [2024-11-19 23:54:11.390446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.220 [2024-11-19 23:54:11.390473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.220 [2024-11-19 23:54:11.390499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.220 [2024-11-19 23:54:11.390520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.220 [2024-11-19 23:54:11.390592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.220 [2024-11-19 23:54:11.390604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.220 [2024-11-19 23:54:11.390610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.220 [2024-11-19 23:54:11.390633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.220 [2024-11-19 23:54:11.390658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.220 [2024-11-19 23:54:11.390678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.220 [2024-11-19 23:54:11.390763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.220 [2024-11-19 23:54:11.390775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.220 [2024-11-19 23:54:11.390782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.220 [2024-11-19 23:54:11.390804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.220 [2024-11-19 
23:54:11.390819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.220 [2024-11-19 23:54:11.390830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.220 [2024-11-19 23:54:11.390850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.220 [2024-11-19 23:54:11.390929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.220 [2024-11-19 23:54:11.390943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.220 [2024-11-19 23:54:11.390950] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.220 [2024-11-19 23:54:11.390972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.390987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.220 [2024-11-19 23:54:11.390997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.220 [2024-11-19 23:54:11.391018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.220 [2024-11-19 23:54:11.395083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.220 [2024-11-19 23:54:11.395100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.220 [2024-11-19 23:54:11.395107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.395113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.220 [2024-11-19 23:54:11.395135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.395146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.395152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5c5650) 00:29:37.220 [2024-11-19 23:54:11.395163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.220 [2024-11-19 23:54:11.395185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6203c0, cid 3, qid 0 00:29:37.220 [2024-11-19 23:54:11.395279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.220 [2024-11-19 23:54:11.395291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.220 [2024-11-19 23:54:11.395298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.395304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6203c0) on tqpair=0x5c5650 00:29:37.220 [2024-11-19 23:54:11.395317] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:29:37.220 00:29:37.220 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
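The spdk_nvme_identify invocation above drives exactly the admin-queue sequence that follows in the log (FABRIC CONNECT, PROPERTY GET/SET, IDENTIFY, SET FEATURES). A rough, minimal sketch of the same connect-and-identify flow through the public SPDK NVMe API is shown below; it assumes an SPDK development install and a target already listening at 10.0.0.2:4420, trims all error handling, and is an illustration rather than the tool's actual implementation:

    /* Minimal sketch (not part of the test log): connect to the TCP target
     * used by this run and print a few Identify Controller fields. */
    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int
    main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {0};
            struct spdk_nvme_ctrlr *ctrlr;
            const struct spdk_nvme_ctrlr_data *cdata;

            spdk_env_opts_init(&env_opts);
            env_opts.name = "identify_sketch";          /* name chosen for this sketch */
            if (spdk_env_init(&env_opts) != 0) {
                    return 1;
            }

            /* Same transport ID string the test passes via -r. */
            if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                    return 1;
            }

            /* Connect runs the admin-queue init state machine visible in the
             * debug output (CONNECT, read vs/cap, enable, IDENTIFY, ...). */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    return 1;
            }

            cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            printf("MN: %.40s  SN: %.20s  FR: %.8s\n",
                   (const char *)cdata->mn, (const char *)cdata->sn,
                   (const char *)cdata->fr);

            spdk_nvme_detach(ctrlr);
            return 0;
    }
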
00:29:37.220 [2024-11-19 23:54:11.429607] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:29:37.220 [2024-11-19 23:54:11.429650] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278443 ] 00:29:37.220 [2024-11-19 23:54:11.480663] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:37.220 [2024-11-19 23:54:11.480719] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:37.220 [2024-11-19 23:54:11.480729] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:37.220 [2024-11-19 23:54:11.480746] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:37.220 [2024-11-19 23:54:11.480761] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:37.220 [2024-11-19 23:54:11.484497] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:37.220 [2024-11-19 23:54:11.484556] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1597650 0 00:29:37.220 [2024-11-19 23:54:11.484675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:37.220 [2024-11-19 23:54:11.484692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:37.220 [2024-11-19 23:54:11.484700] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:37.220 [2024-11-19 23:54:11.484706] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:37.220 [2024-11-19 23:54:11.484737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.484749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.220 [2024-11-19 23:54:11.484757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1597650) 00:29:37.220 [2024-11-19 23:54:11.484770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:37.221 [2024-11-19 23:54:11.484795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1f40, cid 0, qid 0 00:29:37.221 [2024-11-19 23:54:11.491082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.221 [2024-11-19 23:54:11.491101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.221 [2024-11-19 23:54:11.491114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f1f40) on tqpair=0x1597650 00:29:37.221 [2024-11-19 23:54:11.491137] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:37.221 [2024-11-19 23:54:11.491148] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:37.221 [2024-11-19 23:54:11.491158] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:37.221 [2024-11-19 23:54:11.491177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491186] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1597650) 00:29:37.221 [2024-11-19 23:54:11.491203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.221 [2024-11-19 23:54:11.491228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1f40, cid 0, qid 0 00:29:37.221 [2024-11-19 23:54:11.491355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.221 [2024-11-19 23:54:11.491367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.221 [2024-11-19 23:54:11.491374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f1f40) on tqpair=0x1597650 00:29:37.221 [2024-11-19 23:54:11.491389] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:37.221 [2024-11-19 23:54:11.491403] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:37.221 [2024-11-19 23:54:11.491416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1597650) 00:29:37.221 [2024-11-19 23:54:11.491440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.221 [2024-11-19 23:54:11.491461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1f40, cid 0, qid 0 00:29:37.221 [2024-11-19 23:54:11.491541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.221 [2024-11-19 23:54:11.491555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.221 [2024-11-19 23:54:11.491562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f1f40) on tqpair=0x1597650 00:29:37.221 [2024-11-19 23:54:11.491578] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:37.221 [2024-11-19 23:54:11.491592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:37.221 [2024-11-19 23:54:11.491605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1597650) 00:29:37.221 [2024-11-19 23:54:11.491629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.221 [2024-11-19 23:54:11.491650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1f40, cid 0, qid 0 00:29:37.221 [2024-11-19 23:54:11.491735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.221 [2024-11-19 
23:54:11.491748] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.221 [2024-11-19 23:54:11.491754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f1f40) on tqpair=0x1597650 00:29:37.221 [2024-11-19 23:54:11.491774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:37.221 [2024-11-19 23:54:11.491791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1597650) 00:29:37.221 [2024-11-19 23:54:11.491817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.221 [2024-11-19 23:54:11.491838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1f40, cid 0, qid 0 00:29:37.221 [2024-11-19 23:54:11.491923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.221 [2024-11-19 23:54:11.491936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.221 [2024-11-19 23:54:11.491942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.491949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f1f40) on tqpair=0x1597650 00:29:37.221 [2024-11-19 23:54:11.491956] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:37.221 [2024-11-19 23:54:11.491965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:37.221 [2024-11-19 23:54:11.491978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:37.221 [2024-11-19 23:54:11.492089] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:37.221 [2024-11-19 23:54:11.492101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:37.221 [2024-11-19 23:54:11.492114] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.492122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.492128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1597650) 00:29:37.221 [2024-11-19 23:54:11.492138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.221 [2024-11-19 23:54:11.492160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1f40, cid 0, qid 0 00:29:37.221 [2024-11-19 23:54:11.492275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.221 [2024-11-19 23:54:11.492289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.221 [2024-11-19 23:54:11.492296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.221 [2024-11-19 
23:54:11.492303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f1f40) on tqpair=0x1597650 00:29:37.221 [2024-11-19 23:54:11.492311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:37.221 [2024-11-19 23:54:11.492328] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.492338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.492344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1597650) 00:29:37.221 [2024-11-19 23:54:11.492354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.221 [2024-11-19 23:54:11.492375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1f40, cid 0, qid 0 00:29:37.221 [2024-11-19 23:54:11.492463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.221 [2024-11-19 23:54:11.492475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.221 [2024-11-19 23:54:11.492486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.492493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f1f40) on tqpair=0x1597650 00:29:37.221 [2024-11-19 23:54:11.492501] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:37.221 [2024-11-19 23:54:11.492509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:37.221 [2024-11-19 23:54:11.492523] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:37.221 [2024-11-19 23:54:11.492542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:37.221 [2024-11-19 23:54:11.492556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.221 [2024-11-19 23:54:11.492564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1597650) 00:29:37.221 [2024-11-19 23:54:11.492574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.221 [2024-11-19 23:54:11.492595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1f40, cid 0, qid 0 00:29:37.222 [2024-11-19 23:54:11.492742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:37.222 [2024-11-19 23:54:11.492754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:37.222 [2024-11-19 23:54:11.492761] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.492767] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1597650): datao=0, datal=4096, cccid=0 00:29:37.222 [2024-11-19 23:54:11.492775] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f1f40) on tqpair(0x1597650): expected_datao=0, payload_size=4096 00:29:37.222 [2024-11-19 23:54:11.492782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.492793] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.492800] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.492812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.222 [2024-11-19 23:54:11.492822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.222 [2024-11-19 23:54:11.492828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.492834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f1f40) on tqpair=0x1597650 00:29:37.222 [2024-11-19 23:54:11.492845] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:37.222 [2024-11-19 23:54:11.492854] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:37.222 [2024-11-19 23:54:11.492861] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:37.222 [2024-11-19 23:54:11.492873] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:37.222 [2024-11-19 23:54:11.492882] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:37.222 [2024-11-19 23:54:11.492890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:37.222 [2024-11-19 23:54:11.492908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:37.222 [2024-11-19 23:54:11.492922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.492929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.492936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1597650) 00:29:37.222 [2024-11-19 23:54:11.492952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:37.222 [2024-11-19 23:54:11.492975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1f40, cid 0, qid 0 00:29:37.222 [2024-11-19 23:54:11.493065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.222 [2024-11-19 23:54:11.493086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.222 [2024-11-19 23:54:11.493094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f1f40) on tqpair=0x1597650 00:29:37.222 [2024-11-19 23:54:11.493112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1597650) 00:29:37.222 [2024-11-19 23:54:11.493135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.222 [2024-11-19 23:54:11.493145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493152] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1597650) 00:29:37.222 [2024-11-19 23:54:11.493167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.222 [2024-11-19 23:54:11.493176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1597650) 00:29:37.222 [2024-11-19 23:54:11.493198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.222 [2024-11-19 23:54:11.493207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1597650) 00:29:37.222 [2024-11-19 23:54:11.493228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.222 [2024-11-19 23:54:11.493237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:37.222 [2024-11-19 23:54:11.493252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:37.222 [2024-11-19 23:54:11.493264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1597650) 00:29:37.222 [2024-11-19 23:54:11.493281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.222 [2024-11-19 23:54:11.493303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1f40, cid 0, qid 0 00:29:37.222 [2024-11-19 23:54:11.493315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f20c0, cid 1, qid 0 00:29:37.222 [2024-11-19 23:54:11.493322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f2240, cid 2, qid 0 00:29:37.222 [2024-11-19 23:54:11.493330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f23c0, cid 3, qid 0 00:29:37.222 [2024-11-19 23:54:11.493338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f2540, cid 4, qid 0 00:29:37.222 [2024-11-19 23:54:11.493494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.222 [2024-11-19 23:54:11.493509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.222 [2024-11-19 23:54:11.493519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f2540) on tqpair=0x1597650 00:29:37.222 [2024-11-19 23:54:11.493539] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 
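The admin commands in this stretch of the init sequence carry NVMe Feature Identifiers in the low byte of cdw10: 0x0b is Async Event Configuration, 0x0f is Keep Alive Timer, and 0x07 (in the SET FEATURES just below) is Number of Queues; SPDK defines these values as SPDK_NVME_FEAT_* in include/spdk/nvme_spec.h. A standalone decode of the values seen here, for illustration only:

    /* Illustrative only: map the SET/GET FEATURES cdw10 values appearing
     * immediately above and below to NVMe Feature Identifiers. */
    #include <stdio.h>
    #include <stdint.h>

    static const char *
    feature_name(uint32_t cdw10)
    {
            switch (cdw10 & 0xff) {   /* bits 07:00 of cdw10 = Feature Identifier */
            case 0x07: return "NUMBER OF QUEUES";
            case 0x0b: return "ASYNC EVENT CONFIGURATION";
            case 0x0f: return "KEEP ALIVE TIMER";
            default:   return "other";
            }
    }

    int
    main(void)
    {
            uint32_t seen[] = { 0x0000000b, 0x0000000f, 0x00000007 };

            for (size_t i = 0; i < sizeof(seen) / sizeof(seen[0]); i++) {
                    printf("cdw10=0x%08x -> %s\n", (unsigned)seen[i],
                           feature_name(seen[i]));
            }
            return 0;
    }
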
00:29:37.222 [2024-11-19 23:54:11.493549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:37.222 [2024-11-19 23:54:11.493564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:37.222 [2024-11-19 23:54:11.493575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:37.222 [2024-11-19 23:54:11.493586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1597650) 00:29:37.222 [2024-11-19 23:54:11.493610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:37.222 [2024-11-19 23:54:11.493646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f2540, cid 4, qid 0 00:29:37.222 [2024-11-19 23:54:11.493809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.222 [2024-11-19 23:54:11.493821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.222 [2024-11-19 23:54:11.493828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f2540) on tqpair=0x1597650 00:29:37.222 [2024-11-19 23:54:11.493903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:37.222 [2024-11-19 23:54:11.493924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:37.222 [2024-11-19 23:54:11.493938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.493946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1597650) 00:29:37.222 [2024-11-19 23:54:11.493956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.222 [2024-11-19 23:54:11.493977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f2540, cid 4, qid 0 00:29:37.222 [2024-11-19 23:54:11.494098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:37.222 [2024-11-19 23:54:11.494113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:37.222 [2024-11-19 23:54:11.494120] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.494126] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1597650): datao=0, datal=4096, cccid=4 00:29:37.222 [2024-11-19 23:54:11.494134] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f2540) on tqpair(0x1597650): expected_datao=0, payload_size=4096 00:29:37.222 [2024-11-19 23:54:11.494141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.494151] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.494159] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:37.222 [2024-11-19 23:54:11.494171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.222 [2024-11-19 23:54:11.494181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.222 [2024-11-19 23:54:11.494187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.494193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f2540) on tqpair=0x1597650 00:29:37.223 [2024-11-19 23:54:11.494212] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:37.223 [2024-11-19 23:54:11.494230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:37.223 [2024-11-19 23:54:11.494248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:37.223 [2024-11-19 23:54:11.494262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.494270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1597650) 00:29:37.223 [2024-11-19 23:54:11.494280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.223 [2024-11-19 23:54:11.494302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f2540, cid 4, qid 0 00:29:37.223 [2024-11-19 23:54:11.494427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:37.223 [2024-11-19 23:54:11.494439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:37.223 [2024-11-19 23:54:11.494446] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.494452] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1597650): datao=0, datal=4096, cccid=4 00:29:37.223 [2024-11-19 23:54:11.494460] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f2540) on tqpair(0x1597650): expected_datao=0, payload_size=4096 00:29:37.223 [2024-11-19 23:54:11.494467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.494477] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.494484] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.494495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.223 [2024-11-19 23:54:11.494505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.223 [2024-11-19 23:54:11.494512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.494518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f2540) on tqpair=0x1597650 00:29:37.223 [2024-11-19 23:54:11.494539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:37.223 [2024-11-19 23:54:11.494558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:37.223 [2024-11-19 23:54:11.494572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.223 
[2024-11-19 23:54:11.494580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1597650) 00:29:37.223 [2024-11-19 23:54:11.494590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.223 [2024-11-19 23:54:11.494611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f2540, cid 4, qid 0 00:29:37.223 [2024-11-19 23:54:11.498082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:37.223 [2024-11-19 23:54:11.498099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:37.223 [2024-11-19 23:54:11.498106] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498113] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1597650): datao=0, datal=4096, cccid=4 00:29:37.223 [2024-11-19 23:54:11.498121] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f2540) on tqpair(0x1597650): expected_datao=0, payload_size=4096 00:29:37.223 [2024-11-19 23:54:11.498128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498138] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498146] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.223 [2024-11-19 23:54:11.498168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.223 [2024-11-19 23:54:11.498175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f2540) on tqpair=0x1597650 00:29:37.223 [2024-11-19 23:54:11.498195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:37.223 [2024-11-19 23:54:11.498212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:37.223 [2024-11-19 23:54:11.498228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:37.223 [2024-11-19 23:54:11.498240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:37.223 [2024-11-19 23:54:11.498249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:37.223 [2024-11-19 23:54:11.498257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:37.223 [2024-11-19 23:54:11.498266] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:37.223 [2024-11-19 23:54:11.498274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:37.223 [2024-11-19 23:54:11.498283] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:37.223 [2024-11-19 23:54:11.498303] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1597650) 00:29:37.223 [2024-11-19 23:54:11.498323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.223 [2024-11-19 23:54:11.498334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1597650) 00:29:37.223 [2024-11-19 23:54:11.498356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.223 [2024-11-19 23:54:11.498397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f2540, cid 4, qid 0 00:29:37.223 [2024-11-19 23:54:11.498410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f26c0, cid 5, qid 0 00:29:37.223 [2024-11-19 23:54:11.498589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.223 [2024-11-19 23:54:11.498604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.223 [2024-11-19 23:54:11.498611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f2540) on tqpair=0x1597650 00:29:37.223 [2024-11-19 23:54:11.498628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.223 [2024-11-19 23:54:11.498637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.223 [2024-11-19 23:54:11.498644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f26c0) on tqpair=0x1597650 00:29:37.223 [2024-11-19 23:54:11.498666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1597650) 00:29:37.223 [2024-11-19 23:54:11.498686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.223 [2024-11-19 23:54:11.498712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f26c0, cid 5, qid 0 00:29:37.223 [2024-11-19 23:54:11.498798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.223 [2024-11-19 23:54:11.498813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.223 [2024-11-19 23:54:11.498820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f26c0) on tqpair=0x1597650 00:29:37.223 [2024-11-19 23:54:11.498841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1597650) 00:29:37.223 [2024-11-19 23:54:11.498860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:37.223 [2024-11-19 23:54:11.498880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f26c0, cid 5, qid 0 00:29:37.223 [2024-11-19 23:54:11.498961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.223 [2024-11-19 23:54:11.498975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.223 [2024-11-19 23:54:11.498981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.498988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f26c0) on tqpair=0x1597650 00:29:37.223 [2024-11-19 23:54:11.499004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.499013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1597650) 00:29:37.223 [2024-11-19 23:54:11.499023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.223 [2024-11-19 23:54:11.499043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f26c0, cid 5, qid 0 00:29:37.223 [2024-11-19 23:54:11.499132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.223 [2024-11-19 23:54:11.499147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.223 [2024-11-19 23:54:11.499154] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.499161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f26c0) on tqpair=0x1597650 00:29:37.223 [2024-11-19 23:54:11.499186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.499197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1597650) 00:29:37.223 [2024-11-19 23:54:11.499207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.223 [2024-11-19 23:54:11.499220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.499227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1597650) 00:29:37.223 [2024-11-19 23:54:11.499236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.223 [2024-11-19 23:54:11.499247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.223 [2024-11-19 23:54:11.499255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1597650) 00:29:37.224 [2024-11-19 23:54:11.499263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.224 [2024-11-19 23:54:11.499275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1597650) 00:29:37.224 [2024-11-19 23:54:11.499291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.224 [2024-11-19 23:54:11.499317] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f26c0, cid 5, qid 0 00:29:37.224 [2024-11-19 23:54:11.499329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f2540, cid 4, qid 0 00:29:37.224 [2024-11-19 23:54:11.499337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f2840, cid 6, qid 0 00:29:37.224 [2024-11-19 23:54:11.499345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f29c0, cid 7, qid 0 00:29:37.224 [2024-11-19 23:54:11.499547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:37.224 [2024-11-19 23:54:11.499561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:37.224 [2024-11-19 23:54:11.499568] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499574] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1597650): datao=0, datal=8192, cccid=5 00:29:37.224 [2024-11-19 23:54:11.499581] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f26c0) on tqpair(0x1597650): expected_datao=0, payload_size=8192 00:29:37.224 [2024-11-19 23:54:11.499589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499634] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499644] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:37.224 [2024-11-19 23:54:11.499662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:37.224 [2024-11-19 23:54:11.499669] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499675] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1597650): datao=0, datal=512, cccid=4 00:29:37.224 [2024-11-19 23:54:11.499682] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f2540) on tqpair(0x1597650): expected_datao=0, payload_size=512 00:29:37.224 [2024-11-19 23:54:11.499690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499699] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499706] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:37.224 [2024-11-19 23:54:11.499723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:37.224 [2024-11-19 23:54:11.499729] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499735] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1597650): datao=0, datal=512, cccid=6 00:29:37.224 [2024-11-19 23:54:11.499742] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f2840) on tqpair(0x1597650): expected_datao=0, payload_size=512 00:29:37.224 [2024-11-19 23:54:11.499749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499758] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499765] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:37.224 [2024-11-19 23:54:11.499781] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:37.224 [2024-11-19 23:54:11.499788] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499794] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1597650): datao=0, datal=4096, cccid=7 00:29:37.224 [2024-11-19 23:54:11.499801] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f29c0) on tqpair(0x1597650): expected_datao=0, payload_size=4096 00:29:37.224 [2024-11-19 23:54:11.499808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499818] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499825] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.224 [2024-11-19 23:54:11.499846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.224 [2024-11-19 23:54:11.499856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f26c0) on tqpair=0x1597650 00:29:37.224 [2024-11-19 23:54:11.499883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.224 [2024-11-19 23:54:11.499895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.224 [2024-11-19 23:54:11.499901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f2540) on tqpair=0x1597650 00:29:37.224 [2024-11-19 23:54:11.499923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.224 [2024-11-19 23:54:11.499933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.224 [2024-11-19 23:54:11.499954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f2840) on tqpair=0x1597650 00:29:37.224 [2024-11-19 23:54:11.499971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.224 [2024-11-19 23:54:11.499980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.224 [2024-11-19 23:54:11.499987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.224 [2024-11-19 23:54:11.499993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f29c0) on tqpair=0x1597650 00:29:37.224 ===================================================== 00:29:37.224 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.224 ===================================================== 00:29:37.224 Controller Capabilities/Features 00:29:37.224 ================================ 00:29:37.224 Vendor ID: 8086 00:29:37.224 Subsystem Vendor ID: 8086 00:29:37.224 Serial Number: SPDK00000000000001 00:29:37.224 Model Number: SPDK bdev Controller 00:29:37.224 Firmware Version: 25.01 00:29:37.224 Recommended Arb Burst: 6 00:29:37.224 IEEE OUI Identifier: e4 d2 5c 00:29:37.224 Multi-path I/O 00:29:37.224 May have multiple subsystem ports: Yes 00:29:37.224 May have multiple controllers: Yes 00:29:37.224 Associated with SR-IOV VF: No 00:29:37.224 Max Data Transfer Size: 131072 00:29:37.224 Max Number of Namespaces: 32 00:29:37.224 Max Number of I/O Queues: 127 
00:29:37.224 NVMe Specification Version (VS): 1.3 00:29:37.224 NVMe Specification Version (Identify): 1.3 00:29:37.224 Maximum Queue Entries: 128 00:29:37.224 Contiguous Queues Required: Yes 00:29:37.224 Arbitration Mechanisms Supported 00:29:37.224 Weighted Round Robin: Not Supported 00:29:37.224 Vendor Specific: Not Supported 00:29:37.224 Reset Timeout: 15000 ms 00:29:37.224 Doorbell Stride: 4 bytes 00:29:37.224 NVM Subsystem Reset: Not Supported 00:29:37.224 Command Sets Supported 00:29:37.224 NVM Command Set: Supported 00:29:37.224 Boot Partition: Not Supported 00:29:37.224 Memory Page Size Minimum: 4096 bytes 00:29:37.224 Memory Page Size Maximum: 4096 bytes 00:29:37.224 Persistent Memory Region: Not Supported 00:29:37.224 Optional Asynchronous Events Supported 00:29:37.224 Namespace Attribute Notices: Supported 00:29:37.224 Firmware Activation Notices: Not Supported 00:29:37.224 ANA Change Notices: Not Supported 00:29:37.224 PLE Aggregate Log Change Notices: Not Supported 00:29:37.224 LBA Status Info Alert Notices: Not Supported 00:29:37.224 EGE Aggregate Log Change Notices: Not Supported 00:29:37.224 Normal NVM Subsystem Shutdown event: Not Supported 00:29:37.224 Zone Descriptor Change Notices: Not Supported 00:29:37.224 Discovery Log Change Notices: Not Supported 00:29:37.224 Controller Attributes 00:29:37.224 128-bit Host Identifier: Supported 00:29:37.224 Non-Operational Permissive Mode: Not Supported 00:29:37.224 NVM Sets: Not Supported 00:29:37.224 Read Recovery Levels: Not Supported 00:29:37.224 Endurance Groups: Not Supported 00:29:37.224 Predictable Latency Mode: Not Supported 00:29:37.224 Traffic Based Keep ALive: Not Supported 00:29:37.224 Namespace Granularity: Not Supported 00:29:37.224 SQ Associations: Not Supported 00:29:37.224 UUID List: Not Supported 00:29:37.224 Multi-Domain Subsystem: Not Supported 00:29:37.224 Fixed Capacity Management: Not Supported 00:29:37.224 Variable Capacity Management: Not Supported 00:29:37.224 Delete Endurance Group: Not Supported 00:29:37.224 Delete NVM Set: Not Supported 00:29:37.224 Extended LBA Formats Supported: Not Supported 00:29:37.224 Flexible Data Placement Supported: Not Supported 00:29:37.224 00:29:37.224 Controller Memory Buffer Support 00:29:37.224 ================================ 00:29:37.224 Supported: No 00:29:37.224 00:29:37.224 Persistent Memory Region Support 00:29:37.224 ================================ 00:29:37.224 Supported: No 00:29:37.224 00:29:37.224 Admin Command Set Attributes 00:29:37.224 ============================ 00:29:37.224 Security Send/Receive: Not Supported 00:29:37.224 Format NVM: Not Supported 00:29:37.225 Firmware Activate/Download: Not Supported 00:29:37.225 Namespace Management: Not Supported 00:29:37.225 Device Self-Test: Not Supported 00:29:37.225 Directives: Not Supported 00:29:37.225 NVMe-MI: Not Supported 00:29:37.225 Virtualization Management: Not Supported 00:29:37.225 Doorbell Buffer Config: Not Supported 00:29:37.225 Get LBA Status Capability: Not Supported 00:29:37.225 Command & Feature Lockdown Capability: Not Supported 00:29:37.225 Abort Command Limit: 4 00:29:37.225 Async Event Request Limit: 4 00:29:37.225 Number of Firmware Slots: N/A 00:29:37.225 Firmware Slot 1 Read-Only: N/A 00:29:37.225 Firmware Activation Without Reset: N/A 00:29:37.225 Multiple Update Detection Support: N/A 00:29:37.225 Firmware Update Granularity: No Information Provided 00:29:37.225 Per-Namespace SMART Log: No 00:29:37.225 Asymmetric Namespace Access Log Page: Not Supported 00:29:37.225 Subsystem NQN: 
nqn.2016-06.io.spdk:cnode1 00:29:37.225 Command Effects Log Page: Supported 00:29:37.225 Get Log Page Extended Data: Supported 00:29:37.225 Telemetry Log Pages: Not Supported 00:29:37.225 Persistent Event Log Pages: Not Supported 00:29:37.225 Supported Log Pages Log Page: May Support 00:29:37.225 Commands Supported & Effects Log Page: Not Supported 00:29:37.225 Feature Identifiers & Effects Log Page:May Support 00:29:37.225 NVMe-MI Commands & Effects Log Page: May Support 00:29:37.225 Data Area 4 for Telemetry Log: Not Supported 00:29:37.225 Error Log Page Entries Supported: 128 00:29:37.225 Keep Alive: Supported 00:29:37.225 Keep Alive Granularity: 10000 ms 00:29:37.225 00:29:37.225 NVM Command Set Attributes 00:29:37.225 ========================== 00:29:37.225 Submission Queue Entry Size 00:29:37.225 Max: 64 00:29:37.225 Min: 64 00:29:37.225 Completion Queue Entry Size 00:29:37.225 Max: 16 00:29:37.225 Min: 16 00:29:37.225 Number of Namespaces: 32 00:29:37.225 Compare Command: Supported 00:29:37.225 Write Uncorrectable Command: Not Supported 00:29:37.225 Dataset Management Command: Supported 00:29:37.225 Write Zeroes Command: Supported 00:29:37.225 Set Features Save Field: Not Supported 00:29:37.225 Reservations: Supported 00:29:37.225 Timestamp: Not Supported 00:29:37.225 Copy: Supported 00:29:37.225 Volatile Write Cache: Present 00:29:37.225 Atomic Write Unit (Normal): 1 00:29:37.225 Atomic Write Unit (PFail): 1 00:29:37.225 Atomic Compare & Write Unit: 1 00:29:37.225 Fused Compare & Write: Supported 00:29:37.225 Scatter-Gather List 00:29:37.225 SGL Command Set: Supported 00:29:37.225 SGL Keyed: Supported 00:29:37.225 SGL Bit Bucket Descriptor: Not Supported 00:29:37.225 SGL Metadata Pointer: Not Supported 00:29:37.225 Oversized SGL: Not Supported 00:29:37.225 SGL Metadata Address: Not Supported 00:29:37.225 SGL Offset: Supported 00:29:37.225 Transport SGL Data Block: Not Supported 00:29:37.225 Replay Protected Memory Block: Not Supported 00:29:37.225 00:29:37.225 Firmware Slot Information 00:29:37.225 ========================= 00:29:37.225 Active slot: 1 00:29:37.225 Slot 1 Firmware Revision: 25.01 00:29:37.225 00:29:37.225 00:29:37.225 Commands Supported and Effects 00:29:37.225 ============================== 00:29:37.225 Admin Commands 00:29:37.225 -------------- 00:29:37.225 Get Log Page (02h): Supported 00:29:37.225 Identify (06h): Supported 00:29:37.225 Abort (08h): Supported 00:29:37.225 Set Features (09h): Supported 00:29:37.225 Get Features (0Ah): Supported 00:29:37.225 Asynchronous Event Request (0Ch): Supported 00:29:37.225 Keep Alive (18h): Supported 00:29:37.225 I/O Commands 00:29:37.225 ------------ 00:29:37.225 Flush (00h): Supported LBA-Change 00:29:37.225 Write (01h): Supported LBA-Change 00:29:37.225 Read (02h): Supported 00:29:37.225 Compare (05h): Supported 00:29:37.225 Write Zeroes (08h): Supported LBA-Change 00:29:37.225 Dataset Management (09h): Supported LBA-Change 00:29:37.225 Copy (19h): Supported LBA-Change 00:29:37.225 00:29:37.225 Error Log 00:29:37.225 ========= 00:29:37.225 00:29:37.225 Arbitration 00:29:37.225 =========== 00:29:37.225 Arbitration Burst: 1 00:29:37.225 00:29:37.225 Power Management 00:29:37.225 ================ 00:29:37.225 Number of Power States: 1 00:29:37.225 Current Power State: Power State #0 00:29:37.225 Power State #0: 00:29:37.225 Max Power: 0.00 W 00:29:37.225 Non-Operational State: Operational 00:29:37.225 Entry Latency: Not Reported 00:29:37.225 Exit Latency: Not Reported 00:29:37.225 Relative Read Throughput: 0 00:29:37.225 
Relative Read Latency: 0 00:29:37.225 Relative Write Throughput: 0 00:29:37.225 Relative Write Latency: 0 00:29:37.225 Idle Power: Not Reported 00:29:37.225 Active Power: Not Reported 00:29:37.225 Non-Operational Permissive Mode: Not Supported 00:29:37.225 00:29:37.225 Health Information 00:29:37.225 ================== 00:29:37.225 Critical Warnings: 00:29:37.225 Available Spare Space: OK 00:29:37.225 Temperature: OK 00:29:37.225 Device Reliability: OK 00:29:37.225 Read Only: No 00:29:37.225 Volatile Memory Backup: OK 00:29:37.225 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:37.225 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:37.225 Available Spare: 0% 00:29:37.225 Available Spare Threshold: 0% 00:29:37.225 Life Percentage Used:[2024-11-19 23:54:11.500137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.225 [2024-11-19 23:54:11.500150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1597650) 00:29:37.225 [2024-11-19 23:54:11.500161] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.225 [2024-11-19 23:54:11.500184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f29c0, cid 7, qid 0 00:29:37.225 [2024-11-19 23:54:11.500313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.225 [2024-11-19 23:54:11.500326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.225 [2024-11-19 23:54:11.500333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.225 [2024-11-19 23:54:11.500340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f29c0) on tqpair=0x1597650 00:29:37.225 [2024-11-19 23:54:11.500387] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:37.225 [2024-11-19 23:54:11.500407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f1f40) on tqpair=0x1597650 00:29:37.225 [2024-11-19 23:54:11.500418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.225 [2024-11-19 23:54:11.500427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f20c0) on tqpair=0x1597650 00:29:37.225 [2024-11-19 23:54:11.500434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.225 [2024-11-19 23:54:11.500442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f2240) on tqpair=0x1597650 00:29:37.225 [2024-11-19 23:54:11.500449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.225 [2024-11-19 23:54:11.500457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f23c0) on tqpair=0x1597650 00:29:37.225 [2024-11-19 23:54:11.500465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.225 [2024-11-19 23:54:11.500477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.225 [2024-11-19 23:54:11.500484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.225 [2024-11-19 23:54:11.500491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1597650) 00:29:37.225 [2024-11-19 23:54:11.500515] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.225 [2024-11-19 23:54:11.500542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f23c0, cid 3, qid 0 00:29:37.225 [2024-11-19 23:54:11.500679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.225 [2024-11-19 23:54:11.500694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.225 [2024-11-19 23:54:11.500702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.225 [2024-11-19 23:54:11.500708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f23c0) on tqpair=0x1597650 00:29:37.225 [2024-11-19 23:54:11.500719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.225 [2024-11-19 23:54:11.500727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.225 [2024-11-19 23:54:11.500733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1597650) 00:29:37.225 [2024-11-19 23:54:11.500743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.225 [2024-11-19 23:54:11.500769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f23c0, cid 3, qid 0 00:29:37.225 [2024-11-19 23:54:11.500867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.225 [2024-11-19 23:54:11.500882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.225 [2024-11-19 23:54:11.500888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.225 [2024-11-19 23:54:11.500895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f23c0) on tqpair=0x1597650 00:29:37.225 [2024-11-19 23:54:11.500902] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:37.225 [2024-11-19 23:54:11.500910] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:37.225 [2024-11-19 23:54:11.500926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.500935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.500941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1597650) 00:29:37.226 [2024-11-19 23:54:11.500951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.226 [2024-11-19 23:54:11.500971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f23c0, cid 3, qid 0 00:29:37.226 [2024-11-19 23:54:11.501050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.226 [2024-11-19 23:54:11.501064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.226 [2024-11-19 23:54:11.501079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f23c0) on tqpair=0x1597650 00:29:37.226 [2024-11-19 23:54:11.501103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501119] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1597650) 00:29:37.226 [2024-11-19 23:54:11.501129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.226 [2024-11-19 23:54:11.501150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f23c0, cid 3, qid 0 00:29:37.226 [2024-11-19 23:54:11.501233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.226 [2024-11-19 23:54:11.501248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.226 [2024-11-19 23:54:11.501254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f23c0) on tqpair=0x1597650 00:29:37.226 [2024-11-19 23:54:11.501276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1597650) 00:29:37.226 [2024-11-19 23:54:11.501305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.226 [2024-11-19 23:54:11.501326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f23c0, cid 3, qid 0 00:29:37.226 [2024-11-19 23:54:11.501408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.226 [2024-11-19 23:54:11.501420] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.226 [2024-11-19 23:54:11.501426] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f23c0) on tqpair=0x1597650 00:29:37.226 [2024-11-19 23:54:11.501448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1597650) 00:29:37.226 [2024-11-19 23:54:11.501474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.226 [2024-11-19 23:54:11.501493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f23c0, cid 3, qid 0 00:29:37.226 [2024-11-19 23:54:11.501579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.226 [2024-11-19 23:54:11.501591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.226 [2024-11-19 23:54:11.501598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f23c0) on tqpair=0x1597650 00:29:37.226 [2024-11-19 23:54:11.501619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1597650) 00:29:37.226 [2024-11-19 23:54:11.501645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.226 [2024-11-19 23:54:11.501664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f23c0, cid 3, qid 0 00:29:37.226 [2024-11-19 23:54:11.501756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.226 [2024-11-19 23:54:11.501768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.226 [2024-11-19 23:54:11.501774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f23c0) on tqpair=0x1597650 00:29:37.226 [2024-11-19 23:54:11.501796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.501812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1597650) 00:29:37.226 [2024-11-19 23:54:11.501822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.226 [2024-11-19 23:54:11.501841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f23c0, cid 3, qid 0 00:29:37.226 [2024-11-19 23:54:11.505082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.226 [2024-11-19 23:54:11.505099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.226 [2024-11-19 23:54:11.505107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.505114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f23c0) on tqpair=0x1597650 00:29:37.226 [2024-11-19 23:54:11.505132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.505142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.505149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1597650) 00:29:37.226 [2024-11-19 23:54:11.505164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.226 [2024-11-19 23:54:11.505187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f23c0, cid 3, qid 0 00:29:37.226 [2024-11-19 23:54:11.505311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:37.226 [2024-11-19 23:54:11.505325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:37.226 [2024-11-19 23:54:11.505332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:37.226 [2024-11-19 23:54:11.505339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15f23c0) on tqpair=0x1597650 00:29:37.226 [2024-11-19 23:54:11.505352] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:29:37.226 0% 00:29:37.226 Data Units Read: 0 00:29:37.226 Data Units Written: 0 00:29:37.226 Host Read Commands: 0 00:29:37.226 Host Write Commands: 0 00:29:37.226 Controller Busy Time: 0 minutes 00:29:37.226 Power Cycles: 0 00:29:37.226 Power On Hours: 0 hours 00:29:37.226 Unsafe Shutdowns: 0 00:29:37.226 Unrecoverable Media Errors: 0 00:29:37.226 Lifetime Error Log Entries: 0 00:29:37.226 Warning Temperature Time: 0 minutes 00:29:37.226 Critical Temperature Time: 0 minutes 00:29:37.226 00:29:37.226 
Number of Queues 00:29:37.226 ================ 00:29:37.226 Number of I/O Submission Queues: 127 00:29:37.226 Number of I/O Completion Queues: 127 00:29:37.226 00:29:37.226 Active Namespaces 00:29:37.226 ================= 00:29:37.226 Namespace ID:1 00:29:37.226 Error Recovery Timeout: Unlimited 00:29:37.226 Command Set Identifier: NVM (00h) 00:29:37.226 Deallocate: Supported 00:29:37.226 Deallocated/Unwritten Error: Not Supported 00:29:37.226 Deallocated Read Value: Unknown 00:29:37.226 Deallocate in Write Zeroes: Not Supported 00:29:37.226 Deallocated Guard Field: 0xFFFF 00:29:37.226 Flush: Supported 00:29:37.226 Reservation: Supported 00:29:37.226 Namespace Sharing Capabilities: Multiple Controllers 00:29:37.226 Size (in LBAs): 131072 (0GiB) 00:29:37.226 Capacity (in LBAs): 131072 (0GiB) 00:29:37.226 Utilization (in LBAs): 131072 (0GiB) 00:29:37.226 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:37.226 EUI64: ABCDEF0123456789 00:29:37.226 UUID: 9929ac37-c72a-4671-b80b-be27670a156d 00:29:37.226 Thin Provisioning: Not Supported 00:29:37.226 Per-NS Atomic Units: Yes 00:29:37.226 Atomic Boundary Size (Normal): 0 00:29:37.226 Atomic Boundary Size (PFail): 0 00:29:37.226 Atomic Boundary Offset: 0 00:29:37.226 Maximum Single Source Range Length: 65535 00:29:37.226 Maximum Copy Length: 65535 00:29:37.226 Maximum Source Range Count: 1 00:29:37.226 NGUID/EUI64 Never Reused: No 00:29:37.226 Namespace Write Protected: No 00:29:37.227 Number of LBA Formats: 1 00:29:37.227 Current LBA Format: LBA Format #00 00:29:37.227 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:37.227 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:37.484 rmmod nvme_tcp 00:29:37.484 rmmod nvme_fabrics 00:29:37.484 rmmod nvme_keyring 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 278293 ']' 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 278293 00:29:37.484 23:54:11 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 278293 ']' 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 278293 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278293 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278293' 00:29:37.484 killing process with pid 278293 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 278293 00:29:37.484 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 278293 00:29:37.741 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:37.742 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:37.742 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:37.742 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:37.742 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:37.742 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:37.742 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:37.742 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:37.742 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:37.742 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.742 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.742 23:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.647 23:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.647 00:29:39.647 real 0m5.461s 00:29:39.647 user 0m4.535s 00:29:39.647 sys 0m1.926s 00:29:39.647 23:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.647 23:54:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:39.647 ************************************ 00:29:39.647 END TEST nvmf_identify 00:29:39.647 ************************************ 00:29:39.906 23:54:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:39.906 23:54:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:39.906 23:54:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.906 23:54:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.906 ************************************ 00:29:39.906 START TEST nvmf_perf 00:29:39.906 
************************************ 00:29:39.906 23:54:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:39.906 * Looking for test storage... 00:29:39.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.906 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:39.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.907 --rc genhtml_branch_coverage=1 00:29:39.907 --rc genhtml_function_coverage=1 00:29:39.907 --rc genhtml_legend=1 00:29:39.907 --rc geninfo_all_blocks=1 00:29:39.907 --rc geninfo_unexecuted_blocks=1 00:29:39.907 00:29:39.907 ' 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:39.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.907 --rc genhtml_branch_coverage=1 00:29:39.907 --rc genhtml_function_coverage=1 00:29:39.907 --rc genhtml_legend=1 00:29:39.907 --rc geninfo_all_blocks=1 00:29:39.907 --rc geninfo_unexecuted_blocks=1 00:29:39.907 00:29:39.907 ' 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:39.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.907 --rc genhtml_branch_coverage=1 00:29:39.907 --rc genhtml_function_coverage=1 00:29:39.907 --rc genhtml_legend=1 00:29:39.907 --rc geninfo_all_blocks=1 00:29:39.907 --rc geninfo_unexecuted_blocks=1 00:29:39.907 00:29:39.907 ' 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:39.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.907 --rc genhtml_branch_coverage=1 00:29:39.907 --rc genhtml_function_coverage=1 00:29:39.907 --rc genhtml_legend=1 00:29:39.907 --rc geninfo_all_blocks=1 00:29:39.907 --rc geninfo_unexecuted_blocks=1 00:29:39.907 00:29:39.907 ' 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:39.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.907 23:54:14 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.907 23:54:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:41.808 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.808 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:41.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:41.809 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.809 23:54:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:41.809 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.809 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.068 23:54:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:42.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:29:42.068 00:29:42.068 --- 10.0.0.2 ping statistics --- 00:29:42.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.068 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:42.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:29:42.068 00:29:42.068 --- 10.0.0.1 ping statistics --- 00:29:42.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.068 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=280373 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 280373 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 280373 ']' 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:42.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.068 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:42.068 [2024-11-19 23:54:16.308546] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:29:42.068 [2024-11-19 23:54:16.308650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.326 [2024-11-19 23:54:16.383030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:42.326 [2024-11-19 23:54:16.432847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.326 [2024-11-19 23:54:16.432908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.326 [2024-11-19 23:54:16.432928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.326 [2024-11-19 23:54:16.432946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.326 [2024-11-19 23:54:16.432971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.326 [2024-11-19 23:54:16.434636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.326 [2024-11-19 23:54:16.434701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.326 [2024-11-19 23:54:16.434767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.326 [2024-11-19 23:54:16.434770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.326 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.326 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:42.326 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:42.326 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:42.326 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:42.326 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.326 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:42.326 23:54:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:45.606 23:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:45.606 23:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:45.864 23:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:45.864 23:54:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:46.123 23:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
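In outline, the host/perf.sh flow traced here stands up an NVMe-oF/TCP target and then drives it with spdk_nvme_perf. The following is a simplified sketch reconstructed from this trace, not a verbatim excerpt: paths are shown relative to the SPDK checkout used above, and the NQN, serial, listener address and perf flags are the values visible in the surrounding log entries.

    # create a 64 MiB malloc bdev with 512-byte blocks to export alongside the local NVMe drive
    scripts/rpc.py bdev_malloc_create 64 512
    # create the TCP transport and a subsystem, attach both bdevs as namespaces, add a listener
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # baseline perf run against the TCP listener
    build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The later runs in this log keep the same listener and vary only the queue depth (-q), I/O size (-o) and duration (-t), adding options such as -HI, -O and --transport-stat where the trace shows them.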
00:29:46.123 23:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:46.123 23:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:46.123 23:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:46.123 23:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:46.381 [2024-11-19 23:54:20.558823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.381 23:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:46.638 23:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:46.638 23:54:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:46.896 23:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:46.896 23:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:47.154 23:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.412 [2024-11-19 23:54:21.654800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.412 23:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:47.670 23:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:47.670 23:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:47.670 23:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:47.670 23:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:49.044 Initializing NVMe Controllers 00:29:49.044 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:49.044 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:49.044 Initialization complete. Launching workers. 
00:29:49.044 ======================================================== 00:29:49.044 Latency(us) 00:29:49.044 Device Information : IOPS MiB/s Average min max 00:29:49.044 PCIE (0000:88:00.0) NSID 1 from core 0: 86332.58 337.24 370.12 39.78 5259.62 00:29:49.044 ======================================================== 00:29:49.044 Total : 86332.58 337.24 370.12 39.78 5259.62 00:29:49.044 00:29:49.044 23:54:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:50.416 Initializing NVMe Controllers 00:29:50.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:50.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:50.416 Initialization complete. Launching workers. 00:29:50.416 ======================================================== 00:29:50.416 Latency(us) 00:29:50.416 Device Information : IOPS MiB/s Average min max 00:29:50.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 273.00 1.07 3770.46 140.55 45773.24 00:29:50.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.00 0.21 19272.05 7938.27 47909.69 00:29:50.416 ======================================================== 00:29:50.416 Total : 326.00 1.27 6290.66 140.55 47909.69 00:29:50.416 00:29:50.416 23:54:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:51.789 Initializing NVMe Controllers 00:29:51.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:51.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:51.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:51.789 Initialization complete. Launching workers. 00:29:51.789 ======================================================== 00:29:51.789 Latency(us) 00:29:51.789 Device Information : IOPS MiB/s Average min max 00:29:51.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8544.70 33.38 3746.12 626.22 8160.41 00:29:51.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3868.38 15.11 8300.13 5846.47 15819.35 00:29:51.789 ======================================================== 00:29:51.789 Total : 12413.08 48.49 5165.32 626.22 15819.35 00:29:51.789 00:29:51.789 23:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:51.789 23:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:51.789 23:54:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:54.318 Initializing NVMe Controllers 00:29:54.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.318 Controller IO queue size 128, less than required. 00:29:54.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:54.318 Controller IO queue size 128, less than required. 00:29:54.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:54.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:54.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:54.318 Initialization complete. Launching workers. 00:29:54.318 ======================================================== 00:29:54.318 Latency(us) 00:29:54.318 Device Information : IOPS MiB/s Average min max 00:29:54.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1709.54 427.39 76223.67 48953.75 135244.74 00:29:54.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.51 143.63 239152.20 78164.92 359227.91 00:29:54.318 ======================================================== 00:29:54.318 Total : 2284.05 571.01 117205.04 48953.75 359227.91 00:29:54.318 00:29:54.318 23:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:54.576 No valid NVMe controllers or AIO or URING devices found 00:29:54.576 Initializing NVMe Controllers 00:29:54.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.576 Controller IO queue size 128, less than required. 00:29:54.576 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:54.576 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:54.576 Controller IO queue size 128, less than required. 00:29:54.576 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:54.576 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:54.576 WARNING: Some requested NVMe devices were skipped 00:29:54.576 23:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:57.104 Initializing NVMe Controllers 00:29:57.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.104 Controller IO queue size 128, less than required. 00:29:57.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.104 Controller IO queue size 128, less than required. 00:29:57.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:57.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:57.104 Initialization complete. Launching workers. 
00:29:57.104 00:29:57.104 ==================== 00:29:57.104 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:57.104 TCP transport: 00:29:57.104 polls: 9843 00:29:57.104 idle_polls: 6773 00:29:57.104 sock_completions: 3070 00:29:57.104 nvme_completions: 5855 00:29:57.104 submitted_requests: 8848 00:29:57.104 queued_requests: 1 00:29:57.104 00:29:57.104 ==================== 00:29:57.104 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:57.104 TCP transport: 00:29:57.104 polls: 12607 00:29:57.104 idle_polls: 9089 00:29:57.104 sock_completions: 3518 00:29:57.104 nvme_completions: 6467 00:29:57.104 submitted_requests: 9766 00:29:57.104 queued_requests: 1 00:29:57.104 ======================================================== 00:29:57.104 Latency(us) 00:29:57.104 Device Information : IOPS MiB/s Average min max 00:29:57.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1463.43 365.86 89970.54 56138.09 157957.09 00:29:57.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1616.42 404.11 79695.96 43772.08 109073.40 00:29:57.104 ======================================================== 00:29:57.104 Total : 3079.86 769.96 84578.05 43772.08 157957.09 00:29:57.104 00:29:57.104 23:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:57.104 23:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:57.362 23:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:57.362 23:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:57.362 23:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:00.641 23:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=fd764adf-687b-47a8-9d7d-98d3125945bc 00:30:00.641 23:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb fd764adf-687b-47a8-9d7d-98d3125945bc 00:30:00.641 23:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=fd764adf-687b-47a8-9d7d-98d3125945bc 00:30:00.641 23:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:00.641 23:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:00.641 23:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:00.641 23:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:00.898 23:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:00.898 { 00:30:00.898 "uuid": "fd764adf-687b-47a8-9d7d-98d3125945bc", 00:30:00.898 "name": "lvs_0", 00:30:00.898 "base_bdev": "Nvme0n1", 00:30:00.898 "total_data_clusters": 238234, 00:30:00.898 "free_clusters": 238234, 00:30:00.898 "block_size": 512, 00:30:00.898 "cluster_size": 4194304 00:30:00.898 } 00:30:00.898 ]' 00:30:00.898 23:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="fd764adf-687b-47a8-9d7d-98d3125945bc") .free_clusters' 00:30:01.156 23:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:01.156 23:54:35 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="fd764adf-687b-47a8-9d7d-98d3125945bc") .cluster_size' 00:30:01.156 23:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:01.156 23:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:01.156 23:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:30:01.156 952936 00:30:01.156 23:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:01.156 23:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:01.156 23:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fd764adf-687b-47a8-9d7d-98d3125945bc lbd_0 20480 00:30:01.414 23:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=a42a877a-a278-405f-8aee-4a869bbdcd9d 00:30:01.414 23:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore a42a877a-a278-405f-8aee-4a869bbdcd9d lvs_n_0 00:30:02.346 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=0feb751e-8e9f-4694-9c8b-9a28e595cadf 00:30:02.346 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 0feb751e-8e9f-4694-9c8b-9a28e595cadf 00:30:02.346 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=0feb751e-8e9f-4694-9c8b-9a28e595cadf 00:30:02.346 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:02.346 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:02.346 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:02.346 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:02.604 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:02.604 { 00:30:02.604 "uuid": "fd764adf-687b-47a8-9d7d-98d3125945bc", 00:30:02.604 "name": "lvs_0", 00:30:02.604 "base_bdev": "Nvme0n1", 00:30:02.604 "total_data_clusters": 238234, 00:30:02.604 "free_clusters": 233114, 00:30:02.604 "block_size": 512, 00:30:02.604 "cluster_size": 4194304 00:30:02.604 }, 00:30:02.604 { 00:30:02.604 "uuid": "0feb751e-8e9f-4694-9c8b-9a28e595cadf", 00:30:02.604 "name": "lvs_n_0", 00:30:02.604 "base_bdev": "a42a877a-a278-405f-8aee-4a869bbdcd9d", 00:30:02.604 "total_data_clusters": 5114, 00:30:02.604 "free_clusters": 5114, 00:30:02.604 "block_size": 512, 00:30:02.604 "cluster_size": 4194304 00:30:02.604 } 00:30:02.604 ]' 00:30:02.604 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="0feb751e-8e9f-4694-9c8b-9a28e595cadf") .free_clusters' 00:30:02.604 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:02.604 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="0feb751e-8e9f-4694-9c8b-9a28e595cadf") .cluster_size' 00:30:02.604 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:02.604 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:02.604 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:30:02.604 20456 00:30:02.604 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:02.604 23:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0feb751e-8e9f-4694-9c8b-9a28e595cadf lbd_nest_0 20456 00:30:02.862 23:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=4f88618c-30fc-49f1-b20f-7f66bbea8528 00:30:02.862 23:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:03.120 23:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:03.120 23:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 4f88618c-30fc-49f1-b20f-7f66bbea8528 00:30:03.377 23:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.635 23:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:03.635 23:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:03.635 23:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:03.635 23:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:03.635 23:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.831 Initializing NVMe Controllers 00:30:15.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:15.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:15.831 Initialization complete. Launching workers. 00:30:15.831 ======================================================== 00:30:15.831 Latency(us) 00:30:15.831 Device Information : IOPS MiB/s Average min max 00:30:15.831 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 43.60 0.02 23007.29 174.24 45826.80 00:30:15.831 ======================================================== 00:30:15.831 Total : 43.60 0.02 23007.29 174.24 45826.80 00:30:15.831 00:30:15.831 23:54:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:15.831 23:54:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:25.851 Initializing NVMe Controllers 00:30:25.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:25.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:25.851 Initialization complete. Launching workers. 
00:30:25.851 ======================================================== 00:30:25.851 Latency(us) 00:30:25.851 Device Information : IOPS MiB/s Average min max 00:30:25.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.09 9.76 12866.75 5733.95 49151.90 00:30:25.851 ======================================================== 00:30:25.851 Total : 78.09 9.76 12866.75 5733.95 49151.90 00:30:25.851 00:30:25.851 23:54:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:25.851 23:54:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:25.851 23:54:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.818 Initializing NVMe Controllers 00:30:35.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:35.818 Initialization complete. Launching workers. 00:30:35.818 ======================================================== 00:30:35.818 Latency(us) 00:30:35.818 Device Information : IOPS MiB/s Average min max 00:30:35.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7090.34 3.46 4512.92 355.10 11982.10 00:30:35.818 ======================================================== 00:30:35.818 Total : 7090.34 3.46 4512.92 355.10 11982.10 00:30:35.818 00:30:35.818 23:55:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:35.818 23:55:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:45.785 Initializing NVMe Controllers 00:30:45.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:45.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:45.785 Initialization complete. Launching workers. 00:30:45.785 ======================================================== 00:30:45.785 Latency(us) 00:30:45.785 Device Information : IOPS MiB/s Average min max 00:30:45.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3959.83 494.98 8081.08 734.02 18142.51 00:30:45.785 ======================================================== 00:30:45.785 Total : 3959.83 494.98 8081.08 734.02 18142.51 00:30:45.785 00:30:45.785 23:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:45.785 23:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:45.785 23:55:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:55.748 Initializing NVMe Controllers 00:30:55.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:55.748 Controller IO queue size 128, less than required. 00:30:55.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:55.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:55.748 Initialization complete. Launching workers. 00:30:55.748 ======================================================== 00:30:55.748 Latency(us) 00:30:55.748 Device Information : IOPS MiB/s Average min max 00:30:55.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11728.15 5.73 10920.12 1739.29 30233.53 00:30:55.748 ======================================================== 00:30:55.748 Total : 11728.15 5.73 10920.12 1739.29 30233.53 00:30:55.748 00:30:55.748 23:55:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:55.748 23:55:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:07.946 Initializing NVMe Controllers 00:31:07.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:07.946 Controller IO queue size 128, less than required. 00:31:07.946 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:07.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:07.946 Initialization complete. Launching workers. 00:31:07.946 ======================================================== 00:31:07.946 Latency(us) 00:31:07.946 Device Information : IOPS MiB/s Average min max 00:31:07.946 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1198.20 149.77 107428.51 18829.69 236811.53 00:31:07.946 ======================================================== 00:31:07.946 Total : 1198.20 149.77 107428.51 18829.69 236811.53 00:31:07.946 00:31:07.946 23:55:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.946 23:55:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4f88618c-30fc-49f1-b20f-7f66bbea8528 00:31:07.946 23:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:07.946 23:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a42a877a-a278-405f-8aee-4a869bbdcd9d 00:31:07.946 23:55:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.946 rmmod nvme_tcp 
00:31:07.946 rmmod nvme_fabrics 00:31:07.946 rmmod nvme_keyring 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 280373 ']' 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 280373 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 280373 ']' 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 280373 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280373 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280373' 00:31:07.946 killing process with pid 280373 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 280373 00:31:07.946 23:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 280373 00:31:09.853 23:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:09.853 23:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:09.853 23:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:09.853 23:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:09.853 23:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:09.853 23:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:09.853 23:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:09.853 23:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:09.853 23:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:09.853 23:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.853 23:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.853 23:55:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:11.763 00:31:11.763 real 1m31.818s 00:31:11.763 user 5m37.644s 00:31:11.763 sys 0m16.727s 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:11.763 ************************************ 00:31:11.763 END TEST nvmf_perf 00:31:11.763 ************************************ 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.763 ************************************ 00:31:11.763 START TEST nvmf_fio_host 00:31:11.763 ************************************ 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:11.763 * Looking for test storage... 00:31:11.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:11.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.763 --rc genhtml_branch_coverage=1 00:31:11.763 --rc genhtml_function_coverage=1 00:31:11.763 --rc genhtml_legend=1 00:31:11.763 --rc geninfo_all_blocks=1 00:31:11.763 --rc geninfo_unexecuted_blocks=1 00:31:11.763 00:31:11.763 ' 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:11.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.763 --rc genhtml_branch_coverage=1 00:31:11.763 --rc genhtml_function_coverage=1 00:31:11.763 --rc genhtml_legend=1 00:31:11.763 --rc geninfo_all_blocks=1 00:31:11.763 --rc geninfo_unexecuted_blocks=1 00:31:11.763 00:31:11.763 ' 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:11.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.763 --rc genhtml_branch_coverage=1 00:31:11.763 --rc genhtml_function_coverage=1 00:31:11.763 --rc genhtml_legend=1 00:31:11.763 --rc geninfo_all_blocks=1 00:31:11.763 --rc geninfo_unexecuted_blocks=1 00:31:11.763 00:31:11.763 ' 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:11.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.763 --rc genhtml_branch_coverage=1 00:31:11.763 --rc genhtml_function_coverage=1 00:31:11.763 --rc genhtml_legend=1 00:31:11.763 --rc geninfo_all_blocks=1 00:31:11.763 --rc geninfo_unexecuted_blocks=1 00:31:11.763 00:31:11.763 ' 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.763 23:55:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:11.763 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.763 23:55:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.763 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.763 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.763 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:11.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.764 
23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:11.764 23:55:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.297 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:14.298 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:14.298 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:14.298 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:14.298 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:14.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:14.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:31:14.298 00:31:14.298 --- 10.0.0.2 ping statistics --- 00:31:14.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.298 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:14.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:31:14.298 00:31:14.298 --- 10.0.0.1 ping statistics --- 00:31:14.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.298 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=292486 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 292486 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 292486 ']' 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.298 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.298 [2024-11-19 23:55:48.222623] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
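The records above show how nvmftestinit builds the TCP test topology before the target comes up: the first E810 port (cvl_0_0) is moved into a dedicated network namespace, the target address 10.0.0.2/24 is assigned inside that namespace and the initiator address 10.0.0.1/24 on cvl_0_1 outside it, an iptables rule admits TCP port 4420, and both directions are verified with ping; nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF). A minimal sketch of that wiring, using only the interface names and addresses taken from this trace (not a general-purpose script):

    # sketch only; interface names and IPs are the ones seen in the trace above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator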
00:31:14.298 [2024-11-19 23:55:48.222690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.298 [2024-11-19 23:55:48.299127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:14.298 [2024-11-19 23:55:48.348931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.298 [2024-11-19 23:55:48.348995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.298 [2024-11-19 23:55:48.349020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.298 [2024-11-19 23:55:48.349033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.299 [2024-11-19 23:55:48.349045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.299 [2024-11-19 23:55:48.350768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.299 [2024-11-19 23:55:48.350839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:14.299 [2024-11-19 23:55:48.350883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:14.299 [2024-11-19 23:55:48.350886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.299 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.299 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:14.299 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:14.556 [2024-11-19 23:55:48.703280] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.556 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:14.556 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:14.556 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.556 23:55:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:14.815 Malloc1 00:31:14.815 23:55:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:15.072 23:55:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:15.329 23:55:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.587 [2024-11-19 23:55:49.849993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.587 23:55:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:15.844 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:15.844 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:15.845 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:15.845 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:15.845 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:15.845 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:15.845 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:15.845 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:15.845 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:15.845 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:15.845 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:15.845 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:15.845 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:16.102 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:16.102 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:16.102 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.102 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:16.102 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:16.102 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:16.102 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:16.103 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:16.103 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:16.103 23:55:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:16.103 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:16.103 fio-3.35 00:31:16.103 Starting 1 thread 00:31:18.634 00:31:18.635 test: (groupid=0, jobs=1): 
err= 0: pid=292842: Tue Nov 19 23:55:52 2024 00:31:18.635 read: IOPS=8966, BW=35.0MiB/s (36.7MB/s)(70.3MiB/2007msec) 00:31:18.635 slat (nsec): min=1974, max=159450, avg=2448.60, stdev=1866.29 00:31:18.635 clat (usec): min=2484, max=13299, avg=7826.78, stdev=641.03 00:31:18.635 lat (usec): min=2514, max=13301, avg=7829.23, stdev=640.92 00:31:18.635 clat percentiles (usec): 00:31:18.635 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:31:18.635 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:31:18.635 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:31:18.635 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11994], 99.95th=[12387], 00:31:18.635 | 99.99th=[13304] 00:31:18.635 bw ( KiB/s): min=35072, max=36264, per=99.98%, avg=35858.00, stdev=546.60, samples=4 00:31:18.635 iops : min= 8768, max= 9066, avg=8964.50, stdev=136.65, samples=4 00:31:18.635 write: IOPS=8988, BW=35.1MiB/s (36.8MB/s)(70.5MiB/2007msec); 0 zone resets 00:31:18.635 slat (usec): min=2, max=131, avg= 2.57, stdev= 1.40 00:31:18.635 clat (usec): min=1471, max=12325, avg=6381.31, stdev=538.28 00:31:18.635 lat (usec): min=1480, max=12327, avg=6383.88, stdev=538.22 00:31:18.635 clat percentiles (usec): 00:31:18.635 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:31:18.635 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:31:18.635 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:31:18.635 | 99.00th=[ 7504], 99.50th=[ 7635], 99.90th=[10945], 99.95th=[12125], 00:31:18.635 | 99.99th=[12256] 00:31:18.635 bw ( KiB/s): min=35624, max=36224, per=100.00%, avg=35956.00, stdev=270.15, samples=4 00:31:18.635 iops : min= 8908, max= 9056, avg=8989.00, stdev=66.32, samples=4 00:31:18.635 lat (msec) : 2=0.02%, 4=0.10%, 10=99.73%, 20=0.15% 00:31:18.635 cpu : usr=68.44%, sys=29.51%, ctx=80, majf=0, minf=41 00:31:18.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:18.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:18.635 issued rwts: total=17996,18039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:18.635 00:31:18.635 Run status group 0 (all jobs): 00:31:18.635 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.3MiB (73.7MB), run=2007-2007msec 00:31:18.635 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.5MiB (73.9MB), run=2007-2007msec 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:18.635 23:55:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:18.893 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:18.893 fio-3.35 00:31:18.893 Starting 1 thread 00:31:21.421 00:31:21.421 test: (groupid=0, jobs=1): err= 0: pid=293176: Tue Nov 19 23:55:55 2024 00:31:21.421 read: IOPS=8396, BW=131MiB/s (138MB/s)(264MiB/2010msec) 00:31:21.421 slat (nsec): min=2851, max=94123, avg=3683.00, stdev=1589.83 00:31:21.421 clat (usec): min=1856, max=16663, avg=8803.67, stdev=1987.44 00:31:21.421 lat (usec): min=1860, max=16666, avg=8807.35, stdev=1987.46 00:31:21.421 clat percentiles (usec): 00:31:21.421 | 1.00th=[ 4555], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7177], 00:31:21.421 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9241], 00:31:21.421 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11207], 95.00th=[11994], 00:31:21.421 | 99.00th=[13829], 99.50th=[14877], 99.90th=[15926], 99.95th=[16319], 00:31:21.421 | 99.99th=[16581] 00:31:21.421 bw ( KiB/s): min=62432, max=73408, per=50.87%, avg=68336.00, stdev=5845.67, samples=4 00:31:21.421 iops : min= 3902, max= 4588, avg=4271.00, stdev=365.35, samples=4 00:31:21.421 write: IOPS=4863, BW=76.0MiB/s (79.7MB/s)(140MiB/1841msec); 0 zone resets 00:31:21.421 
slat (usec): min=30, max=163, avg=33.20, stdev= 4.96 00:31:21.421 clat (usec): min=6058, max=19422, avg=11491.44, stdev=1983.22 00:31:21.421 lat (usec): min=6091, max=19454, avg=11524.64, stdev=1983.13 00:31:21.421 clat percentiles (usec): 00:31:21.421 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9765], 00:31:21.421 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:31:21.421 | 70.00th=[12387], 80.00th=[13042], 90.00th=[14091], 95.00th=[15139], 00:31:21.421 | 99.00th=[16581], 99.50th=[16909], 99.90th=[19006], 99.95th=[19006], 00:31:21.421 | 99.99th=[19530] 00:31:21.421 bw ( KiB/s): min=64480, max=76992, per=91.26%, avg=71016.00, stdev=6811.18, samples=4 00:31:21.421 iops : min= 4030, max= 4812, avg=4438.50, stdev=425.70, samples=4 00:31:21.421 lat (msec) : 2=0.01%, 4=0.33%, 10=54.57%, 20=45.09% 00:31:21.421 cpu : usr=77.80%, sys=20.61%, ctx=49, majf=0, minf=61 00:31:21.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:21.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:21.421 issued rwts: total=16876,8954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:21.421 00:31:21.421 Run status group 0 (all jobs): 00:31:21.421 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=264MiB (276MB), run=2010-2010msec 00:31:21.421 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=140MiB (147MB), run=1841-1841msec 00:31:21.421 23:55:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:21.421 23:55:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:21.421 23:55:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:21.421 23:55:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:21.421 23:55:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:21.421 23:55:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:21.421 23:55:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:21.421 23:55:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:21.421 23:55:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:21.421 23:55:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:21.421 23:55:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:21.421 23:55:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:24.701 Nvme0n1 00:31:24.701 23:55:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=ecf4a6af-c8db-4cb2-b5f0-451a88de3925 
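ls_guid above is the UUID returned by bdev_lvol_create_lvstore (a store named lvs_0 on Nvme0n1 with a 1 GiB cluster size); the records that follow show get_lvs_free_mb turning that UUID into a size for the lvol: it reads bdev_lvol_get_lvstores, extracts free_clusters and cluster_size with jq, and converts the product to MiB, here 930 clusters x 1 GiB = 952320 MiB, which bdev_lvol_create then uses for lvs_0/lbd_0. A sketch of the same calculation, assuming the rpc.py path used throughout this workspace:

    # sketch: recompute the free space of an lvstore in MiB (UUID value copied from the trace)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    uuid=ecf4a6af-c8db-4cb2-b5f0-451a88de3925
    fc=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
    cs=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
    echo $(( fc * cs / 1024 / 1024 ))    # 930 * 1073741824 B -> 952320 MiB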
00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb ecf4a6af-c8db-4cb2-b5f0-451a88de3925 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=ecf4a6af-c8db-4cb2-b5f0-451a88de3925 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:27.977 { 00:31:27.977 "uuid": "ecf4a6af-c8db-4cb2-b5f0-451a88de3925", 00:31:27.977 "name": "lvs_0", 00:31:27.977 "base_bdev": "Nvme0n1", 00:31:27.977 "total_data_clusters": 930, 00:31:27.977 "free_clusters": 930, 00:31:27.977 "block_size": 512, 00:31:27.977 "cluster_size": 1073741824 00:31:27.977 } 00:31:27.977 ]' 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="ecf4a6af-c8db-4cb2-b5f0-451a88de3925") .free_clusters' 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="ecf4a6af-c8db-4cb2-b5f0-451a88de3925") .cluster_size' 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:27.977 952320 00:31:27.977 23:56:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:28.236 14f3282a-6279-469a-bd78-9e01b56b2c9f 00:31:28.236 23:56:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:28.528 23:56:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:28.810 23:56:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:29.068 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # 
local fio_dir=/usr/src/fio 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:29.069 23:56:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:29.327 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:29.327 fio-3.35 00:31:29.327 Starting 1 thread 00:31:31.854 00:31:31.854 test: (groupid=0, jobs=1): err= 0: pid=294576: Tue Nov 19 23:56:05 2024 00:31:31.854 read: IOPS=6037, BW=23.6MiB/s (24.7MB/s)(47.4MiB/2009msec) 00:31:31.854 slat (nsec): min=1830, max=128975, avg=2339.52, stdev=1796.69 00:31:31.854 clat (usec): min=1039, max=171202, avg=11629.26, stdev=11587.86 00:31:31.854 lat (usec): min=1042, max=171244, avg=11631.60, stdev=11588.11 00:31:31.854 clat percentiles (msec): 00:31:31.854 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:31:31.854 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:31:31.854 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:31:31.854 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:31.854 | 99.99th=[ 171] 00:31:31.854 bw ( KiB/s): min=16976, 
max=26704, per=99.86%, avg=24118.00, stdev=4763.56, samples=4 00:31:31.854 iops : min= 4244, max= 6676, avg=6029.50, stdev=1190.89, samples=4 00:31:31.854 write: IOPS=6019, BW=23.5MiB/s (24.7MB/s)(47.2MiB/2009msec); 0 zone resets 00:31:31.854 slat (nsec): min=1932, max=113844, avg=2434.41, stdev=1401.78 00:31:31.854 clat (usec): min=335, max=169397, avg=9479.20, stdev=10890.91 00:31:31.854 lat (usec): min=339, max=169403, avg=9481.63, stdev=10891.17 00:31:31.854 clat percentiles (msec): 00:31:31.854 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:31:31.854 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:31:31.854 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:31:31.854 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:31:31.854 | 99.99th=[ 169] 00:31:31.854 bw ( KiB/s): min=18024, max=26248, per=99.98%, avg=24076.00, stdev=4037.18, samples=4 00:31:31.854 iops : min= 4506, max= 6562, avg=6019.00, stdev=1009.30, samples=4 00:31:31.854 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:31.854 lat (msec) : 2=0.03%, 4=0.13%, 10=57.15%, 20=42.14%, 250=0.53% 00:31:31.854 cpu : usr=61.60%, sys=36.80%, ctx=91, majf=0, minf=41 00:31:31.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:31.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:31.854 issued rwts: total=12130,12094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:31.854 00:31:31.854 Run status group 0 (all jobs): 00:31:31.854 READ: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=47.4MiB (49.7MB), run=2009-2009msec 00:31:31.854 WRITE: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=47.2MiB (49.5MB), run=2009-2009msec 00:31:31.854 23:56:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:32.112 23:56:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=0419740f-418f-4e79-b49f-46203036403c 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 0419740f-418f-4e79-b49f-46203036403c 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=0419740f-418f-4e79-b49f-46203036403c 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:33.484 { 00:31:33.484 "uuid": "ecf4a6af-c8db-4cb2-b5f0-451a88de3925", 00:31:33.484 "name": "lvs_0", 00:31:33.484 "base_bdev": "Nvme0n1", 00:31:33.484 "total_data_clusters": 930, 00:31:33.484 "free_clusters": 0, 00:31:33.484 
"block_size": 512, 00:31:33.484 "cluster_size": 1073741824 00:31:33.484 }, 00:31:33.484 { 00:31:33.484 "uuid": "0419740f-418f-4e79-b49f-46203036403c", 00:31:33.484 "name": "lvs_n_0", 00:31:33.484 "base_bdev": "14f3282a-6279-469a-bd78-9e01b56b2c9f", 00:31:33.484 "total_data_clusters": 237847, 00:31:33.484 "free_clusters": 237847, 00:31:33.484 "block_size": 512, 00:31:33.484 "cluster_size": 4194304 00:31:33.484 } 00:31:33.484 ]' 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="0419740f-418f-4e79-b49f-46203036403c") .free_clusters' 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="0419740f-418f-4e79-b49f-46203036403c") .cluster_size' 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:33.484 951388 00:31:33.484 23:56:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:34.418 00b907a4-a832-4545-ac81-cefb8bf0d816 00:31:34.418 23:56:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:34.418 23:56:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:34.675 23:56:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for 
sanitizer in "${sanitizers[@]}" 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:34.933 23:56:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:35.191 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:35.191 fio-3.35 00:31:35.191 Starting 1 thread 00:31:37.716 00:31:37.716 test: (groupid=0, jobs=1): err= 0: pid=295316: Tue Nov 19 23:56:11 2024 00:31:37.716 read: IOPS=5701, BW=22.3MiB/s (23.4MB/s)(45.7MiB/2051msec) 00:31:37.716 slat (nsec): min=1958, max=130204, avg=2510.44, stdev=1917.80 00:31:37.716 clat (usec): min=4460, max=60883, avg=12247.76, stdev=3276.12 00:31:37.716 lat (usec): min=4464, max=60885, avg=12250.28, stdev=3276.08 00:31:37.716 clat percentiles (usec): 00:31:37.716 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[10683], 20.00th=[11207], 00:31:37.716 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:31:37.716 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:31:37.716 | 99.00th=[14484], 99.50th=[50594], 99.90th=[58983], 99.95th=[59507], 00:31:37.716 | 99.99th=[61080] 00:31:37.716 bw ( KiB/s): min=21848, max=23880, per=100.00%, avg=23266.00, stdev=950.65, samples=4 00:31:37.716 iops : min= 5462, max= 5970, avg=5816.50, stdev=237.66, samples=4 00:31:37.716 write: IOPS=5687, BW=22.2MiB/s (23.3MB/s)(45.6MiB/2051msec); 0 zone resets 00:31:37.716 slat (nsec): min=2098, max=95763, avg=2609.44, stdev=1425.03 00:31:37.716 clat (usec): min=2123, max=59492, avg=10066.55, stdev=3506.30 00:31:37.716 lat (usec): min=2129, max=59494, avg=10069.16, stdev=3506.27 00:31:37.716 clat percentiles (usec): 00:31:37.716 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:31:37.716 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:31:37.716 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 
00:31:37.716 | 99.00th=[11863], 99.50th=[50594], 99.90th=[58983], 99.95th=[58983], 00:31:37.716 | 99.99th=[59507] 00:31:37.716 bw ( KiB/s): min=22936, max=23328, per=100.00%, avg=23198.00, stdev=183.06, samples=4 00:31:37.716 iops : min= 5734, max= 5832, avg=5799.50, stdev=45.76, samples=4 00:31:37.716 lat (msec) : 4=0.05%, 10=30.10%, 20=69.31%, 100=0.54% 00:31:37.716 cpu : usr=60.68%, sys=37.76%, ctx=90, majf=0, minf=41 00:31:37.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:37.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:37.716 issued rwts: total=11694,11665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:37.716 00:31:37.716 Run status group 0 (all jobs): 00:31:37.716 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=45.7MiB (47.9MB), run=2051-2051msec 00:31:37.716 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=45.6MiB (47.8MB), run=2051-2051msec 00:31:37.716 23:56:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:37.974 23:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:37.974 23:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:42.151 23:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:42.151 23:56:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:45.436 23:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:45.436 23:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:47.338 rmmod nvme_tcp 00:31:47.338 rmmod nvme_fabrics 00:31:47.338 rmmod nvme_keyring 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@129 -- # return 0 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 292486 ']' 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 292486 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 292486 ']' 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 292486 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 292486 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 292486' 00:31:47.338 killing process with pid 292486 00:31:47.338 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 292486 00:31:47.339 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 292486 00:31:47.597 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.597 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.597 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.597 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:47.598 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:31:47.598 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.598 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.598 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.598 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.598 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.598 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.598 23:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.497 23:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.497 00:31:49.497 real 0m37.898s 00:31:49.497 user 2m25.828s 00:31:49.497 sys 0m6.805s 00:31:49.497 23:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.497 23:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.497 ************************************ 00:31:49.497 END TEST nvmf_fio_host 00:31:49.497 ************************************ 00:31:49.497 23:56:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:49.497 23:56:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:49.497 23:56:23 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.497 23:56:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.497 ************************************ 00:31:49.497 START TEST nvmf_failover 00:31:49.497 ************************************ 00:31:49.497 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:49.756 * Looking for test storage... 00:31:49.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:49.756 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:49.756 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:31:49.756 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:49.756 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:49.756 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:49.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.757 --rc genhtml_branch_coverage=1 00:31:49.757 --rc genhtml_function_coverage=1 00:31:49.757 --rc genhtml_legend=1 00:31:49.757 --rc geninfo_all_blocks=1 00:31:49.757 --rc geninfo_unexecuted_blocks=1 00:31:49.757 00:31:49.757 ' 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:49.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.757 --rc genhtml_branch_coverage=1 00:31:49.757 --rc genhtml_function_coverage=1 00:31:49.757 --rc genhtml_legend=1 00:31:49.757 --rc geninfo_all_blocks=1 00:31:49.757 --rc geninfo_unexecuted_blocks=1 00:31:49.757 00:31:49.757 ' 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:49.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.757 --rc genhtml_branch_coverage=1 00:31:49.757 --rc genhtml_function_coverage=1 00:31:49.757 --rc genhtml_legend=1 00:31:49.757 --rc geninfo_all_blocks=1 00:31:49.757 --rc geninfo_unexecuted_blocks=1 00:31:49.757 00:31:49.757 ' 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:49.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.757 --rc genhtml_branch_coverage=1 00:31:49.757 --rc genhtml_function_coverage=1 00:31:49.757 --rc genhtml_legend=1 00:31:49.757 --rc geninfo_all_blocks=1 00:31:49.757 --rc geninfo_unexecuted_blocks=1 00:31:49.757 00:31:49.757 ' 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:49.757 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:49.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
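At this point nvmf/common.sh has pinned the NVMe/TCP ports (4420/4421/4422) and generated a host NQN with nvme gen-hostnqn, and failover.sh has just set its malloc bdev geometry and the rpc.py path it will use to drive the target. A minimal sketch of the same knobs outside the harness, with paths and values copied from the trace above; the host-ID extraction at the end is an illustrative assumption, not the harness code:

    # Environment the failover test relies on (values as seen in the trace above)
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    MALLOC_BDEV_SIZE=64            # MiB backing the test namespace
    MALLOC_BLOCK_SIZE=512          # bytes per block
    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed: keep only the uuid part as the host ID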
00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.758 23:56:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:51.659 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:51.659 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:51.659 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:51.660 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:51.660 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.660 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.919 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.919 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.919 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:51.919 23:56:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:51.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:31:51.919 00:31:51.919 --- 10.0.0.2 ping statistics --- 00:31:51.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.919 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:51.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:31:51.919 00:31:51.919 --- 10.0.0.1 ping statistics --- 00:31:51.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.919 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=298571 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 298571 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 298571 ']' 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:51.919 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:51.919 [2024-11-19 23:56:26.108425] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:31:51.919 [2024-11-19 23:56:26.108526] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:51.919 [2024-11-19 23:56:26.188080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:52.177 [2024-11-19 23:56:26.238449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:52.177 [2024-11-19 23:56:26.238505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:52.177 [2024-11-19 23:56:26.238531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:52.177 [2024-11-19 23:56:26.238544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:52.177 [2024-11-19 23:56:26.238556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:52.177 [2024-11-19 23:56:26.240175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:52.177 [2024-11-19 23:56:26.240244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:52.177 [2024-11-19 23:56:26.240247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:52.177 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:52.177 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:52.177 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:52.177 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:52.177 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:52.177 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:52.178 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:52.435 [2024-11-19 23:56:26.623471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:52.435 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:52.693 Malloc0 00:31:52.693 23:56:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:52.950 23:56:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:53.208 23:56:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:53.466 [2024-11-19 23:56:27.733880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.466 23:56:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:53.725 [2024-11-19 23:56:28.006769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:53.725 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:53.983 [2024-11-19 23:56:28.279573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:31:54.241 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=298864 00:31:54.241 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:54.241 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:54.241 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 298864 /var/tmp/bdevperf.sock 00:31:54.241 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 298864 ']' 00:31:54.241 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:54.241 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.241 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:54.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:54.241 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.241 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:54.499 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.499 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:54.499 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:54.757 NVMe0n1 00:31:54.757 23:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:55.321 00:31:55.321 23:56:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=298999 00:31:55.321 23:56:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:55.321 23:56:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:56.255 23:56:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.512 23:56:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:59.793 23:56:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:00.050 00:32:00.050 23:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:00.308 [2024-11-19 23:56:34.445195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the 
state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 [2024-11-19 23:56:34.445962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf5a0 is same with the state(6) to be set 00:32:00.309 23:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:03.590 23:56:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.590 [2024-11-19 23:56:37.746746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.590 23:56:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:04.523 23:56:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:04.782 [2024-11-19 23:56:39.079394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc04d0 is same with the state(6) to be set 00:32:04.782 [2024-11-19 23:56:39.079464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc04d0 is same with the state(6) to be set 00:32:04.782 [2024-11-19 23:56:39.079480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc04d0 is same with the state(6) to be set 00:32:04.782 [2024-11-19 23:56:39.079506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc04d0 is same with the state(6) to be set 00:32:04.782 [2024-11-19 23:56:39.079519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc04d0 is same with the state(6) to be set 00:32:04.782 [2024-11-19 23:56:39.079531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc04d0 is same with the state(6) to be set 00:32:04.782 [2024-11-19 23:56:39.079543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc04d0 is same with the state(6) to be set 00:32:04.782 [2024-11-19 23:56:39.079554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc04d0 is same with the state(6) to be set 00:32:05.038 23:56:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 298999 00:32:10.360 { 00:32:10.360 "results": [ 00:32:10.360 { 00:32:10.360 "job": "NVMe0n1", 00:32:10.360 "core_mask": "0x1", 00:32:10.360 "workload": "verify", 00:32:10.360 "status": "finished", 00:32:10.360 "verify_range": { 00:32:10.360 "start": 0, 00:32:10.360 "length": 16384 00:32:10.360 }, 00:32:10.360 "queue_depth": 128, 00:32:10.360 "io_size": 4096, 00:32:10.360 "runtime": 15.01342, 00:32:10.360 "iops": 8250.951482074037, 00:32:10.360 "mibps": 32.23027922685171, 00:32:10.360 "io_failed": 12236, 00:32:10.360 "io_timeout": 0, 00:32:10.360 "avg_latency_us": 14091.353968016847, 00:32:10.360 "min_latency_us": 591.6444444444444, 00:32:10.360 "max_latency_us": 
48545.18518518518 00:32:10.360 } 00:32:10.360 ], 00:32:10.360 "core_count": 1 00:32:10.360 } 00:32:10.360 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 298864 00:32:10.360 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 298864 ']' 00:32:10.360 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 298864 00:32:10.360 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:10.360 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:10.360 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 298864 00:32:10.360 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:10.360 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:10.360 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 298864' 00:32:10.360 killing process with pid 298864 00:32:10.360 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 298864 00:32:10.360 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 298864 00:32:10.632 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:10.632 [2024-11-19 23:56:28.344885] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:32:10.632 [2024-11-19 23:56:28.344982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid298864 ] 00:32:10.632 [2024-11-19 23:56:28.412597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.632 [2024-11-19 23:56:28.458520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.632 Running I/O for 15 seconds... 
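
The bdevperf per-job summary shown above can be cross-checked by hand: throughput in MiB/s is simply iops x io_size, and iops x avg_latency (Little's law) gives the average number of commands in flight, which should sit at or below the configured queue_depth. A minimal sketch in plain Python, with the numbers copied verbatim from the JSON block above (nothing beyond those fields is assumed):

    # Cross-check the bdevperf summary printed above (values copied from the log).
    summary = {
        "queue_depth": 128,
        "io_size": 4096,                      # bytes per I/O
        "runtime": 15.01342,                  # seconds
        "iops": 8250.951482074037,
        "mibps": 32.23027922685171,
        "avg_latency_us": 14091.353968016847,
    }

    # Throughput: IOPS * bytes per I/O, expressed in MiB/s.
    mibps = summary["iops"] * summary["io_size"] / (1 << 20)
    print(f"computed MiB/s    = {mibps:.8f}")     # ~32.23027923, matches "mibps"

    # Little's law: average in-flight commands = IOPS * average latency (seconds).
    inflight = summary["iops"] * summary["avg_latency_us"] / 1e6
    print(f"avg in-flight I/O = {inflight:.1f}")  # ~116.3, below queue_depth=128

An in-flight average of roughly 116 rather than the full 128 is consistent with the reconnect windows during failover, in which the 12236 failed I/Os were recorded.
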
00:32:10.632 8614.00 IOPS, 33.65 MiB/s [2024-11-19T22:56:44.944Z] [2024-11-19 23:56:30.750349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.632 [2024-11-19 23:56:30.750433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
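
The completions replayed here all carry the same status pair, (00/08): status code type 0x0 (generic command status) with status code 0x08, which the driver renders as ABORTED - SQ DELETION, i.e. the command was aborted because its submission queue went away. That is consistent with the failover test tearing the TCP connection down while I/O is outstanding. A minimal decode sketch for the pair as printed in the log (a two-entry mapping of my own for illustration, not SPDK's status-string table):

    # Decode the "(sct/sc)" pair that spdk_nvme_print_completion prints, e.g. "(00/08)".
    # Only the values visible in this log are mapped; this is not a complete table.
    GENERIC_STATUS = {      # status code type 0x0: generic command status
        0x00: "SUCCESS",
        0x08: "ABORTED - SQ DELETION",   # command aborted because its SQ was deleted
    }

    def decode(pair: str) -> str:
        sct, sc = (int(part, 16) for part in pair.strip("()").split("/"))
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
        return f"sct 0x{sct:x} / sc 0x{sc:02x}"

    print(decode("(00/08)"))   # -> ABORTED - SQ DELETION
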
00:32:10.632 [2024-11-19 23:56:30.750715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.750972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.750987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.751000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.632 [2024-11-19 23:56:30.751014] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.632 [2024-11-19 23:56:30.751026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751319] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83744 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 
[2024-11-19 23:56:30.751908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.751975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.751989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.752003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.752016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.752030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.752042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.752080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.752096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.752111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.752124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.752139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.752152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.633 [2024-11-19 23:56:30.752167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.633 [2024-11-19 23:56:30.752181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.634 [2024-11-19 23:56:30.752795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.752979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.752993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.753007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.753020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.753034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.753047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.753062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.753106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.753124] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.753137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.753152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.753166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.753180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.753193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.753208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.753221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.753236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.753249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.753264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.753283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.753299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.753312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.753327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.634 [2024-11-19 23:56:30.753340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.634 [2024-11-19 23:56:30.753372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.753407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84208 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.753421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.753479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.635 [2024-11-19 23:56:30.753516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.753532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.635 [2024-11-19 23:56:30.753544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.753558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.635 [2024-11-19 23:56:30.753570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.753584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.635 [2024-11-19 23:56:30.753597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.753610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f603b0 is same with the state(6) to be set 00:32:10.635 [2024-11-19 23:56:30.753845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.753864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.753876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84216 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.753889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.753904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.753916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.753926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84224 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.753939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.753952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.753962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.753973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84232 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.753985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.753998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84240 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84248 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84256 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84264 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84272 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84280 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84288 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754371] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84296 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84304 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84312 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84320 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84328 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84336 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84344 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84352 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84360 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.635 [2024-11-19 23:56:30.754814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.635 [2024-11-19 23:56:30.754824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84368 len:8 PRP1 0x0 PRP2 0x0 00:32:10.635 [2024-11-19 23:56:30.754840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.635 [2024-11-19 23:56:30.754855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.754866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.754877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84376 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.754889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.754901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.754913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.754924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84384 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.754936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.754948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 
23:56:30.754959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.754969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84392 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.754981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.754994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.755005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.755021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84400 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.755034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.755047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.755057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.755075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84408 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.755105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.755119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.755130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.755141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84416 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.755153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.755166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.755177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.755188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84424 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.755201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.755214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.755225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.755240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84432 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.755254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.755267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.755278] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.755289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83416 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.755301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.755314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.755325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.755336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83432 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.755348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.755361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.755371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.755397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83440 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.755410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.770611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.770638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.770653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83448 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.770682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.770697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.770707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.770717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83456 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.770729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.770741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.770751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.770762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83464 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.770774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.770785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.770795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:32:10.636 [2024-11-19 23:56:30.770805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83472 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.770816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.770830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.770847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.770858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83480 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.770871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.770883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.770893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.770904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83488 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.770916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.770928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.770938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.770949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83496 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.770962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.770974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.636 [2024-11-19 23:56:30.770985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.636 [2024-11-19 23:56:30.770995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83504 len:8 PRP1 0x0 PRP2 0x0 00:32:10.636 [2024-11-19 23:56:30.771007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.636 [2024-11-19 23:56:30.771020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83512 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 
23:56:30.771136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83520 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83528 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83536 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83544 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83552 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83560 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83568 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83576 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83584 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83592 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83600 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83608 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:83616 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83624 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83632 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83640 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83648 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.771963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.771974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83656 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.771986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.771999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.772009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.772023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83664 len:8 PRP1 0x0 PRP2 0x0 
00:32:10.637 [2024-11-19 23:56:30.772036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.772065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.772085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.772097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83672 len:8 PRP1 0x0 PRP2 0x0 00:32:10.637 [2024-11-19 23:56:30.772109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.637 [2024-11-19 23:56:30.772143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.637 [2024-11-19 23:56:30.772155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.637 [2024-11-19 23:56:30.772166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83680 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83688 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83696 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83704 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83712 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83720 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83728 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83736 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83744 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83752 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83760 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83768 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83776 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83784 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83792 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83800 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.772930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83808 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.772962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:10.638 [2024-11-19 23:56:30.772975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.772985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.772995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83816 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.773008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.773020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.773030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.773040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83824 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.773067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.773090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.773129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.773142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83832 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.773155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.773170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.773181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.773192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83840 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.773205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.773218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.773229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.773245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83848 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.773258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.773272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.773283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.773295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83856 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.773308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.773321] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.638 [2024-11-19 23:56:30.773333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.638 [2024-11-19 23:56:30.773344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83864 len:8 PRP1 0x0 PRP2 0x0 00:32:10.638 [2024-11-19 23:56:30.773357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.638 [2024-11-19 23:56:30.773384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.773395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83872 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.773442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83880 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.773489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83888 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.773534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83896 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.773580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83904 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.773629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83912 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.773674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83920 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.773719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83928 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.773764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83936 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.773808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83944 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.773853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83952 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 
23:56:30.773902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83960 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.773955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.773965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83968 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.773977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.773993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.774004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.774014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83976 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.774026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.774038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.774064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.774085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83984 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.774131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.774146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.774157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.774168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83992 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.774181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.774193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.774204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.774215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84000 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.774227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.783162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.783191] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.783205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84008 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.783219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.783232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.783244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.783255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84016 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.783268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.783281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.783291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.783303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84024 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.783316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.783329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.783342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.783352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84032 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.783371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.783385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.783396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.783406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84040 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.783434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.783448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.639 [2024-11-19 23:56:30.783458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.639 [2024-11-19 23:56:30.783468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84048 len:8 PRP1 0x0 PRP2 0x0 00:32:10.639 [2024-11-19 23:56:30.783482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.639 [2024-11-19 23:56:30.783494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.783504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.783515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84056 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.783526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.783539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.783549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.783560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83424 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.783572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.783584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.783594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.783605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84064 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.783617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.783629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.783639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.783659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84072 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.783670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.783683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.783693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.783704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84080 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.783725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.783737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.783747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.783761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84088 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.783774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.783786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.783797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 
23:56:30.783808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84096 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.783819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.783831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.783842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.783852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84104 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.783864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.783877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.783888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.783899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84112 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.783911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.783923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.783941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.783951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84120 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.783963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.783976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.783985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.783996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84128 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.784008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.784020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.784030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.784041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84136 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.784077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.784093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.784104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.784131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84144 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.784144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.784161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.784172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.784184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84152 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.784196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.784209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.784220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.784231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84160 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.784244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.784257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.784268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.784279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84168 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.784291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.784304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.784315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.784327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84176 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.784339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.784368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.784379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.784390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84184 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.784403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.784431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.784441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.784452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:84192 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.784464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.784476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.784486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.784496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84200 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.784509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.784521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.640 [2024-11-19 23:56:30.784531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.640 [2024-11-19 23:56:30.784541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84208 len:8 PRP1 0x0 PRP2 0x0 00:32:10.640 [2024-11-19 23:56:30.784557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:30.784622] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:10.640 [2024-11-19 23:56:30.784648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:10.640 [2024-11-19 23:56:30.784710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f603b0 (9): Bad file descriptor 00:32:10.640 [2024-11-19 23:56:30.787914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:10.640 [2024-11-19 23:56:30.945758] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
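
The long run of "aborting queued i/o" / "Command completed manually" / "ABORTED - SQ DELETION (00/08)" records above corresponds to the queued READ/WRITE commands being completed with SQ-deletion status while bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421 and then resets the controller, as the last few records show. If a quick summary of such a burst is wanted from a saved copy of this console output, a minimal sketch using standard shell tools is shown below; the filename build.log is an assumption for illustration and is not produced by the test itself.

# Illustrative sketch only (assumes the console output was saved as build.log).
# Total completions aborted with SQ DELETION status:
grep -o 'ABORTED - SQ DELETION' build.log | wc -l
# Smallest and largest LBA named in the aborted READ/WRITE commands:
grep -oE 'lba:[0-9]+' build.log | cut -d: -f2 | sort -n | sed -n '1p;$p'
# Number of failover events logged by bdev_nvme:
grep -o 'bdev_nvme_failover_trid' build.log | wc -l
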
00:32:10.640 7764.50 IOPS, 30.33 MiB/s [2024-11-19T22:56:44.952Z] 7967.67 IOPS, 31.12 MiB/s [2024-11-19T22:56:44.952Z] 8095.50 IOPS, 31.62 MiB/s [2024-11-19T22:56:44.952Z] [2024-11-19 23:56:34.445840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.640 [2024-11-19 23:56:34.445885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.640 [2024-11-19 23:56:34.445904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.641 [2024-11-19 23:56:34.445919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.445934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.641 [2024-11-19 23:56:34.445947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.445961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.641 [2024-11-19 23:56:34.445975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.445988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f603b0 is same with the state(6) to be set 00:32:10.641 [2024-11-19 23:56:34.447961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.447987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.641 [2024-11-19 23:56:34.448285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:43 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109056 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.641 [2024-11-19 23:56:34.448891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.641 [2024-11-19 23:56:34.448906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.448920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.448934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.448948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.448962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.448975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.448991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:10.642 [2024-11-19 23:56:34.449032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449367] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.449973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.449987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.450000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.450015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.450031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.450046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.450059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.642 [2024-11-19 23:56:34.450097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.642 [2024-11-19 23:56:34.450119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.643 [2024-11-19 23:56:34.450206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.643 [2024-11-19 23:56:34.450234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.643 [2024-11-19 23:56:34.450268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.643 [2024-11-19 23:56:34.450297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.643 [2024-11-19 23:56:34.450325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.643 [2024-11-19 23:56:34.450353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.643 [2024-11-19 23:56:34.450382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:10.643 [2024-11-19 23:56:34.450582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450874] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.450982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.450996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.451010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.451023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.451037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.451056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.451071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.451108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.451125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.451138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.451153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.451167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.451185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.451199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.451214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.643 [2024-11-19 23:56:34.451233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.451265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.643 [2024-11-19 23:56:34.451282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109664 len:8 PRP1 0x0 PRP2 0x0 00:32:10.643 [2024-11-19 23:56:34.451295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.643 [2024-11-19 23:56:34.451313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.643 [2024-11-19 23:56:34.451325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.643 [2024-11-19 23:56:34.451337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109672 len:8 PRP1 0x0 PRP2 0x0 00:32:10.643 [2024-11-19 23:56:34.451349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.451373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109680 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.451437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109688 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.451483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109696 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:32:10.644 [2024-11-19 23:56:34.451529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109704 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.451582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109712 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.451632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109720 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.451684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109728 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.451731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109736 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.451777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109744 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 
23:56:34.451822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109752 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.451868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109760 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.451914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109768 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.451955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.451965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.451979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109776 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.451992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.452006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.452016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.452027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109784 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.452039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.452057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.452091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.452105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109792 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.452119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.452133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.452144] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.452155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109800 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.452167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.452180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.452192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.452202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109808 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.452215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.452227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.644 [2024-11-19 23:56:34.452239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.644 [2024-11-19 23:56:34.452250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109816 len:8 PRP1 0x0 PRP2 0x0 00:32:10.644 [2024-11-19 23:56:34.452263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:34.452330] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:10.644 [2024-11-19 23:56:34.452349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:32:10.644 [2024-11-19 23:56:34.455622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:10.644 [2024-11-19 23:56:34.455662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f603b0 (9): Bad file descriptor 00:32:10.644 7976.20 IOPS, 31.16 MiB/s [2024-11-19T22:56:44.956Z] [2024-11-19 23:56:34.567370] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
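Editor's note: the bandwidth counters interleaved with the log (7764.50 IOPS / 30.33 MiB/s up through 7976.20 IOPS / 31.16 MiB/s in this stretch) are consistent with the 4 KiB I/O size visible in the aborted commands (len:8 blocks of 512 bytes, SGL len 0x1000). A quick sanity check of that relationship, assuming 512-byte blocks:

# Sanity-check the "IOPS, MiB/s" pairs reported while the failover test runs.
# Assumes the 4 KiB I/O size implied by "len:8" (512-byte blocks) / SGL len 0x1000.
IO_SIZE_BYTES = 8 * 512  # 0x1000 bytes per command

def mib_per_s(iops: float, io_size: int = IO_SIZE_BYTES) -> float:
    """Convert an IOPS figure into MiB/s for a fixed I/O size."""
    return iops * io_size / (1024 * 1024)

print(round(mib_per_s(7976.20), 2))  # 31.16, matching the value logged above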
00:32:10.644 8069.33 IOPS, 31.52 MiB/s [2024-11-19T22:56:44.956Z] 8148.71 IOPS, 31.83 MiB/s [2024-11-19T22:56:44.956Z] 8218.50 IOPS, 32.10 MiB/s [2024-11-19T22:56:44.956Z] 8261.22 IOPS, 32.27 MiB/s [2024-11-19T22:56:44.956Z] [2024-11-19 23:56:39.079681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.644 [2024-11-19 23:56:39.079722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:39.079756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.644 [2024-11-19 23:56:39.079772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:39.079788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.644 [2024-11-19 23:56:39.079801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:39.079816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.644 [2024-11-19 23:56:39.079830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:39.079844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.644 [2024-11-19 23:56:39.079857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.644 [2024-11-19 23:56:39.079871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.644 [2024-11-19 23:56:39.079885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.079915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.079928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.079941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.079954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.079970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.645 [2024-11-19 23:56:39.079983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.079997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 
[2024-11-19 23:56:39.080009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:10.645 [2024-11-19 23:56:39.080897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.080977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.645 [2024-11-19 23:56:39.080990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.645 [2024-11-19 23:56:39.081003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081190] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.646 [2024-11-19 23:56:39.081290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.646 [2024-11-19 23:56:39.081317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.646 [2024-11-19 23:56:39.081345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.646 [2024-11-19 23:56:39.081386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.646 [2024-11-19 23:56:39.081414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.646 [2024-11-19 23:56:39.081456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.646 [2024-11-19 23:56:39.081483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081498] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64424 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.646 [2024-11-19 23:56:39.081931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.081977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.646 [2024-11-19 23:56:39.081998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64472 len:8 PRP1 0x0 PRP2 0x0 00:32:10.646 [2024-11-19 23:56:39.082012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.082059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.646 [2024-11-19 23:56:39.082088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.082105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.646 [2024-11-19 23:56:39.082119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.082133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.646 [2024-11-19 23:56:39.082146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.646 [2024-11-19 23:56:39.082159] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.647 [2024-11-19 23:56:39.082172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f603b0 is same with the state(6) to be set 00:32:10.647 [2024-11-19 23:56:39.082375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64480 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.082421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64488 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.082488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64496 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.082534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64504 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.082580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64512 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.082630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082653] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64520 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.082676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64528 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.082722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64536 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.082768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64544 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.082815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64552 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.082861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64560 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.082906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64568 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.082951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.082964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.082978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.082989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64576 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.083002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.083015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.083025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.083036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64584 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.083048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.083086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.083098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.083109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64592 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.083122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.083135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.083146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.083156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64600 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.083169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.083182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.083193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.083203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64608 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.083216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.083228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.083239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 
23:56:39.083250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64616 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.083262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.083275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.083286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.083296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64624 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.083314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.083327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.083338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.083348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64632 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.083360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.083377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.083388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.083414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64640 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.083427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.083440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.083450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.083461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64648 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.083472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.647 [2024-11-19 23:56:39.083485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.647 [2024-11-19 23:56:39.083496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.647 [2024-11-19 23:56:39.083507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64656 len:8 PRP1 0x0 PRP2 0x0 00:32:10.647 [2024-11-19 23:56:39.083520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.083532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.083543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.083553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64664 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.083566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.083579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.083589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.083600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64672 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.083620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.083634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.083645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.083656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64680 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.083668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.083681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.083691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.083703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64688 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.083716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.083728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.083739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.083750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63736 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.083766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.083780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.083791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.083801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63744 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.083814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.083826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.083837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.083848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:63752 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.083861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.083873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.083884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.083895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63760 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.083908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.083921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.083931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.083941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63768 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.083953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.083966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.083977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.083988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63776 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63784 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63792 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63800 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 
[2024-11-19 23:56:39.084177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63808 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63816 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63824 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63832 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63840 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63848 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63856 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63864 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63872 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.648 [2024-11-19 23:56:39.084651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.648 [2024-11-19 23:56:39.084662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63880 len:8 PRP1 0x0 PRP2 0x0 00:32:10.648 [2024-11-19 23:56:39.084674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.648 [2024-11-19 23:56:39.084687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.084703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.084713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63888 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.084726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.084739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.084749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.084760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63896 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.084772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.084784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.084794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.084805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63904 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.084823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.084836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.084846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.084857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63912 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.084869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.084882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.084893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.084903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63920 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.084915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.084930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.084942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.084952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63928 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.084964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.084977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.084987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.084998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63936 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.085010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.085024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.085034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.085060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63944 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.085078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:10.649 [2024-11-19 23:56:39.085094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.085111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.085123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63952 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.085135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.085147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.085158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.085169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63960 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.085181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.085194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.100456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.100513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63968 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.100530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.100545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.100556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.100568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63976 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.100580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.100593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.100603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.100614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63984 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.100632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.100660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.100670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.649 [2024-11-19 23:56:39.100681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63672 len:8 PRP1 0x0 PRP2 0x0 00:32:10.649 [2024-11-19 23:56:39.100693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649 [2024-11-19 23:56:39.100705] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.649 [2024-11-19 23:56:39.100716 - 23:56:39.121483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request / 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated notices for each queued command on sqid:1 cid:0 nsid:1 (WRITE lba 63992-64464 and READ lba 63680-63728, len:8, PRP1 0x0 PRP2 0x0), every one completed manually and reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.649-00:32:10.652 [2024-11-19 23:56:39.121497]
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.652 [2024-11-19 23:56:39.121509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.652 [2024-11-19 23:56:39.121520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64472 len:8 PRP1 0x0 PRP2 0x0 00:32:10.652 [2024-11-19 23:56:39.121534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.652 [2024-11-19 23:56:39.121625] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:10.652 [2024-11-19 23:56:39.121645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:10.652 [2024-11-19 23:56:39.121707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f603b0 (9): Bad file descriptor 00:32:10.652 [2024-11-19 23:56:39.125034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:10.652 [2024-11-19 23:56:39.151221] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:32:10.652 8223.30 IOPS, 32.12 MiB/s [2024-11-19T22:56:44.964Z] 8238.00 IOPS, 32.18 MiB/s [2024-11-19T22:56:44.964Z] 8230.58 IOPS, 32.15 MiB/s [2024-11-19T22:56:44.964Z] 8237.08 IOPS, 32.18 MiB/s [2024-11-19T22:56:44.964Z] 8245.86 IOPS, 32.21 MiB/s [2024-11-19T22:56:44.964Z] 8249.87 IOPS, 32.23 MiB/s 00:32:10.652 Latency(us) 00:32:10.652 [2024-11-19T22:56:44.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.652 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:10.652 Verification LBA range: start 0x0 length 0x4000 00:32:10.652 NVMe0n1 : 15.01 8250.95 32.23 815.00 0.00 14091.35 591.64 48545.19 00:32:10.652 [2024-11-19T22:56:44.964Z] =================================================================================================================== 00:32:10.652 [2024-11-19T22:56:44.964Z] Total : 8250.95 32.23 815.00 0.00 14091.35 591.64 48545.19 00:32:10.652 Received shutdown signal, test time was about 15.000000 seconds 00:32:10.652 00:32:10.652 Latency(us) 00:32:10.652 [2024-11-19T22:56:44.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.652 [2024-11-19T22:56:44.964Z] =================================================================================================================== 00:32:10.652 [2024-11-19T22:56:44.964Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:10.652 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:10.652 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:10.652 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:10.652 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=300831 00:32:10.652 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:10.652 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 300831 /var/tmp/bdevperf.sock 00:32:10.652 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@835 -- # '[' -z 300831 ']' 00:32:10.652 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:10.652 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:10.652 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:10.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:10.652 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:10.652 23:56:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:10.911 23:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:10.911 23:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:10.911 23:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:11.169 [2024-11-19 23:56:45.362010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:11.169 23:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:11.427 [2024-11-19 23:56:45.638747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:11.427 23:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:11.992 NVMe0n1 00:32:11.992 23:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:12.249 00:32:12.249 23:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:12.815 00:32:12.815 23:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:12.815 23:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:13.073 23:56:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:13.335 23:56:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:16.621 23:56:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:16.621 23:56:50 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:16.621 23:56:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=301504 00:32:16.621 23:56:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:16.621 23:56:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 301504 00:32:17.995 { 00:32:17.995 "results": [ 00:32:17.995 { 00:32:17.995 "job": "NVMe0n1", 00:32:17.995 "core_mask": "0x1", 00:32:17.995 "workload": "verify", 00:32:17.995 "status": "finished", 00:32:17.995 "verify_range": { 00:32:17.995 "start": 0, 00:32:17.995 "length": 16384 00:32:17.995 }, 00:32:17.995 "queue_depth": 128, 00:32:17.995 "io_size": 4096, 00:32:17.995 "runtime": 1.00482, 00:32:17.995 "iops": 8688.123245954499, 00:32:17.996 "mibps": 33.93798142950976, 00:32:17.996 "io_failed": 0, 00:32:17.996 "io_timeout": 0, 00:32:17.996 "avg_latency_us": 14670.16972109796, 00:32:17.996 "min_latency_us": 1341.0607407407408, 00:32:17.996 "max_latency_us": 12184.841481481482 00:32:17.996 } 00:32:17.996 ], 00:32:17.996 "core_count": 1 00:32:17.996 } 00:32:17.996 23:56:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:17.996 [2024-11-19 23:56:44.888857] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:32:17.996 [2024-11-19 23:56:44.888941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300831 ] 00:32:17.996 [2024-11-19 23:56:44.956851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.996 [2024-11-19 23:56:45.001898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.996 [2024-11-19 23:56:47.477190] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:17.996 [2024-11-19 23:56:47.477273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.996 [2024-11-19 23:56:47.477297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.996 [2024-11-19 23:56:47.477313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.996 [2024-11-19 23:56:47.477327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.996 [2024-11-19 23:56:47.477342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.996 [2024-11-19 23:56:47.477364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.996 [2024-11-19 23:56:47.477378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.996 [2024-11-19 23:56:47.477392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.996 [2024-11-19 23:56:47.477413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:17.996 [2024-11-19 23:56:47.477460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:17.996 [2024-11-19 23:56:47.477493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23793b0 (9): Bad file descriptor 00:32:17.996 [2024-11-19 23:56:47.526549] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:17.996 Running I/O for 1 seconds... 00:32:17.996 8602.00 IOPS, 33.60 MiB/s 00:32:17.996 Latency(us) 00:32:17.996 [2024-11-19T22:56:52.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.996 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:17.996 Verification LBA range: start 0x0 length 0x4000 00:32:17.996 NVMe0n1 : 1.00 8688.12 33.94 0.00 0.00 14670.17 1341.06 12184.84 00:32:17.996 [2024-11-19T22:56:52.308Z] =================================================================================================================== 00:32:17.996 [2024-11-19T22:56:52.308Z] Total : 8688.12 33.94 0.00 0.00 14670.17 1341.06 12184.84 00:32:17.996 23:56:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:17.996 23:56:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:17.996 23:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:18.253 23:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:18.253 23:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:18.511 23:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:18.784 23:56:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:22.065 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:22.065 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:22.065 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 300831 00:32:22.065 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 300831 ']' 00:32:22.065 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 300831 00:32:22.065 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:22.065 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:22.065 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 300831 00:32:22.065 23:56:56 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:22.065 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:22.065 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 300831' 00:32:22.065 killing process with pid 300831 00:32:22.065 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 300831 00:32:22.065 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 300831 00:32:22.322 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:22.322 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:22.580 rmmod nvme_tcp 00:32:22.580 rmmod nvme_fabrics 00:32:22.580 rmmod nvme_keyring 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 298571 ']' 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 298571 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 298571 ']' 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 298571 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 298571 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 298571' 00:32:22.580 killing process with pid 298571 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 298571 00:32:22.580 23:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@978 -- # wait 298571 00:32:22.840 23:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:22.840 23:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:22.840 23:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:22.840 23:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:22.840 23:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:22.840 23:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:22.840 23:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:22.840 23:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:22.840 23:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:22.840 23:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.840 23:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.840 23:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:25.372 00:32:25.372 real 0m35.335s 00:32:25.372 user 2m5.074s 00:32:25.372 sys 0m5.815s 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:25.372 ************************************ 00:32:25.372 END TEST nvmf_failover 00:32:25.372 ************************************ 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.372 ************************************ 00:32:25.372 START TEST nvmf_host_discovery 00:32:25.372 ************************************ 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:25.372 * Looking for test storage... 
00:32:25.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.372 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:25.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.373 --rc genhtml_branch_coverage=1 00:32:25.373 --rc genhtml_function_coverage=1 00:32:25.373 --rc genhtml_legend=1 00:32:25.373 --rc geninfo_all_blocks=1 00:32:25.373 --rc geninfo_unexecuted_blocks=1 00:32:25.373 00:32:25.373 ' 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:25.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.373 --rc genhtml_branch_coverage=1 00:32:25.373 --rc genhtml_function_coverage=1 00:32:25.373 --rc genhtml_legend=1 00:32:25.373 --rc geninfo_all_blocks=1 00:32:25.373 --rc geninfo_unexecuted_blocks=1 00:32:25.373 00:32:25.373 ' 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:25.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.373 --rc genhtml_branch_coverage=1 00:32:25.373 --rc genhtml_function_coverage=1 00:32:25.373 --rc genhtml_legend=1 00:32:25.373 --rc geninfo_all_blocks=1 00:32:25.373 --rc geninfo_unexecuted_blocks=1 00:32:25.373 00:32:25.373 ' 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:25.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.373 --rc genhtml_branch_coverage=1 00:32:25.373 --rc genhtml_function_coverage=1 00:32:25.373 --rc genhtml_legend=1 00:32:25.373 --rc geninfo_all_blocks=1 00:32:25.373 --rc geninfo_unexecuted_blocks=1 00:32:25.373 00:32:25.373 ' 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:25.373 23:56:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:25.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:25.373 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:25.374 23:56:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.273 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:27.273 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:27.273 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:27.273 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:27.273 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:27.273 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:27.273 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:27.273 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:27.273 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:27.273 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:27.274 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:27.274 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.274 23:57:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:27.274 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:27.274 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:27.274 
23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:27.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:27.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:32:27.274 00:32:27.274 --- 10.0.0.2 ping statistics --- 00:32:27.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.274 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:27.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:27.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:32:27.274 00:32:27.274 --- 10.0.0.1 ping statistics --- 00:32:27.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.274 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:27.274 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=304203 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 304203 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 304203 ']' 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.275 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.275 [2024-11-19 23:57:01.511826] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
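The nvmf_tcp_init block traced above is the whole test-bed topology for this run: the first ice port (cvl_0_0) is moved into a private network namespace as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420 toward the namespace, both directions are ping-checked, and nvme-tcp is loaded. A minimal standalone sketch of the same steps in bash, with the interface and namespace names taken from this run (they are hardware-specific and will differ on other machines):

  # target NIC goes into a namespace, initiator NIC stays in the root namespace
  TARGET_IF=cvl_0_0
  INITIATOR_IF=cvl_0_1
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # allow NVMe/TCP traffic in from the namespace side, then sanity-check both directions
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1
  modprobe nvme-tcp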
00:32:27.275 [2024-11-19 23:57:01.511936] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.533 [2024-11-19 23:57:01.595475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.533 [2024-11-19 23:57:01.643472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.533 [2024-11-19 23:57:01.643542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.533 [2024-11-19 23:57:01.643559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.533 [2024-11-19 23:57:01.643573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.533 [2024-11-19 23:57:01.643584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.533 [2024-11-19 23:57:01.644264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.533 [2024-11-19 23:57:01.793541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.533 [2024-11-19 23:57:01.801781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.533 null0 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.533 null1 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=304352 00:32:27.533 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:27.534 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 304352 /tmp/host.sock 00:32:27.534 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 304352 ']' 00:32:27.534 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:27.534 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.534 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:27.534 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:27.534 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.534 23:57:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.792 [2024-11-19 23:57:01.877492] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
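At this point two SPDK applications are running: the nvmf target inside the cvl_0_0_ns_spdk namespace (core mask 0x2, default RPC socket /var/tmp/spdk.sock) and a second nvmf_tgt in the root namespace acting as the host (core mask 0x1, RPC socket /tmp/host.sock). The trace drives them through the autotest rpc_cmd wrapper; a condensed sketch of the same bring-up using scripts/rpc.py directly is below. Paths follow this workspace layout and are otherwise an assumption:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk

  # target application inside the namespace (the test waits for /var/tmp/spdk.sock before issuing RPCs)
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &

  # TCP transport plus a discovery listener on 8009, and two null bdevs to export later
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  "$SPDK/scripts/rpc.py" bdev_null_create null0 1000 512
  "$SPDK/scripts/rpc.py" bdev_null_create null1 1000 512
  "$SPDK/scripts/rpc.py" bdev_wait_for_examine

  # host-side application in the root namespace, RPC socket at /tmp/host.sock
  "$SPDK/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &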
00:32:27.792 [2024-11-19 23:57:01.877561] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid304352 ] 00:32:27.792 [2024-11-19 23:57:01.948215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.792 [2024-11-19 23:57:01.997192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:28.051 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.309 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:28.309 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:28.309 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.309 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.309 [2024-11-19 23:57:02.391329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.309 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.309 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:28.309 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:28.309 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.309 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:28.309 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:28.310 23:57:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:28.310 23:57:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:28.875 [2024-11-19 23:57:03.182214] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:28.875 [2024-11-19 23:57:03.182240] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:28.875 [2024-11-19 23:57:03.182262] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:29.133 [2024-11-19 23:57:03.268551] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:29.391 [2024-11-19 23:57:03.490883] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:29.391 [2024-11-19 23:57:03.491910] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xbef1b0:1 started. 00:32:29.391 [2024-11-19 23:57:03.493887] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:29.391 [2024-11-19 23:57:03.493912] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:29.391 [2024-11-19 23:57:03.499871] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xbef1b0 was disconnected and freed. delete nvme_qpair. 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.391 23:57:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:29.391 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:29.651 23:57:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:29.651 [2024-11-19 23:57:03.753552] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xbef900:1 started. 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:29.651 [2024-11-19 23:57:03.760716] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xbef900 was disconnected and freed. delete nvme_qpair. 
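The rest of the test follows one pattern: the target side adds pieces to nqn.2016-06.io.spdk:cnode0 (namespaces null0 and null1, listeners on 4420 and 4421, the allowed host nqn.2021-12.io.spdk:test), while the host side, which started discovery against 10.0.0.2:8009 earlier in the trace, is polled over /tmp/host.sock until the expected controllers, bdevs, and notifications appear. A condensed paraphrase of the host-side helpers visible in the trace (the jq filters, the 10-attempt limit, and the 1-second sleep are taken from the trace; the helper bodies here are simplified, not copies of host/discovery.sh):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  HOST_SOCK=/tmp/host.sock
  notify_id=0

  # start discovery once; everything after this is polling
  "$SPDK/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  get_bdev_list() {
      "$SPDK/scripts/rpc.py" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  get_notification_count() {
      # count notifications newer than the last seen notify_id
      "$SPDK/scripts/rpc.py" -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" | jq '. | length'
  }

  waitforcondition() {
      local cond=$1 max=10
      while ((max--)); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }

  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'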
00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.651 [2024-11-19 23:57:03.839800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:29.651 [2024-11-19 23:57:03.840125] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:29.651 [2024-11-19 23:57:03.840154] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 
nvme0n2" ]]' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.651 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.652 [2024-11-19 23:57:03.927059] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:29.652 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:32:29.910 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:29.910 23:57:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:29.910 [2024-11-19 23:57:04.187584] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:29.910 [2024-11-19 23:57:04.187645] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:29.910 [2024-11-19 23:57:04.187664] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:29.910 [2024-11-19 23:57:04.187674] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:30.845 23:57:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:30.845 23:57:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:30.845 23:57:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:30.845 23:57:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:30.845 23:57:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:30.845 23:57:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.845 23:57:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:30.845 23:57:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.845 23:57:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:30.845 23:57:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.845 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.845 [2024-11-19 23:57:05.064096] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:30.845 [2024-11-19 23:57:05.064152] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:30.845 [2024-11-19 23:57:05.065131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.845 [2024-11-19 23:57:05.065173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.845 [2024-11-19 23:57:05.065191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.845 [2024-11-19 23:57:05.065205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.845 [2024-11-19 23:57:05.065220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.845 [2024-11-19 23:57:05.065233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.845 [2024-11-19 23:57:05.065248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.845 [2024-11-19 23:57:05.065261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.846 [2024-11-19 23:57:05.065274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc11f0 is same with the state(6) to be set 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:30.846 [2024-11-19 23:57:05.075262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc11f0 (9): Bad file descriptor 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.846 [2024-11-19 23:57:05.085299] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:30.846 [2024-11-19 23:57:05.085324] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:30.846 [2024-11-19 23:57:05.085336] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:30.846 [2024-11-19 23:57:05.085345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:30.846 [2024-11-19 23:57:05.085395] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:30.846 [2024-11-19 23:57:05.085670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.846 [2024-11-19 23:57:05.085701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc11f0 with addr=10.0.0.2, port=4420 00:32:30.846 [2024-11-19 23:57:05.085719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc11f0 is same with the state(6) to be set 00:32:30.846 [2024-11-19 23:57:05.085749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc11f0 (9): Bad file descriptor 00:32:30.846 [2024-11-19 23:57:05.085771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:30.846 [2024-11-19 23:57:05.085786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:30.846 [2024-11-19 23:57:05.085802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:30.846 [2024-11-19 23:57:05.085816] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:30.846 [2024-11-19 23:57:05.085827] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:30.846 [2024-11-19 23:57:05.085836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:30.846 [2024-11-19 23:57:05.095425] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:30.846 [2024-11-19 23:57:05.095462] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:30.846 [2024-11-19 23:57:05.095472] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:30.846 [2024-11-19 23:57:05.095480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:30.846 [2024-11-19 23:57:05.095519] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:30.846 [2024-11-19 23:57:05.095692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.846 [2024-11-19 23:57:05.095720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc11f0 with addr=10.0.0.2, port=4420 00:32:30.846 [2024-11-19 23:57:05.095738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc11f0 is same with the state(6) to be set 00:32:30.846 [2024-11-19 23:57:05.095760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc11f0 (9): Bad file descriptor 00:32:30.846 [2024-11-19 23:57:05.095781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:30.846 [2024-11-19 23:57:05.095796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:30.846 [2024-11-19 23:57:05.095809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:30.846 [2024-11-19 23:57:05.095821] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:30.846 [2024-11-19 23:57:05.095831] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:30.846 [2024-11-19 23:57:05.095839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:30.846 [2024-11-19 23:57:05.105552] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:30.846 [2024-11-19 23:57:05.105572] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:30.846 [2024-11-19 23:57:05.105581] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:30.846 [2024-11-19 23:57:05.105588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:30.846 [2024-11-19 23:57:05.105611] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:30.846 [2024-11-19 23:57:05.105817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.846 [2024-11-19 23:57:05.105845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc11f0 with addr=10.0.0.2, port=4420 00:32:30.846 [2024-11-19 23:57:05.105867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc11f0 is same with the state(6) to be set 00:32:30.846 [2024-11-19 23:57:05.105890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc11f0 (9): Bad file descriptor 00:32:30.846 [2024-11-19 23:57:05.105911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:30.846 [2024-11-19 23:57:05.105926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:30.846 [2024-11-19 23:57:05.105939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:30.846 [2024-11-19 23:57:05.105951] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:30.846 [2024-11-19 23:57:05.105960] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:30.846 [2024-11-19 23:57:05.105967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:30.846 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:30.846 [2024-11-19 23:57:05.116037] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:30.846 [2024-11-19 23:57:05.116061] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
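The waitforcondition lines interleaved with the driver errors here (autotest_common.sh@918 through @924) poll a shell condition once per second, up to ten times. A minimal reconstruction of that pattern, using only what the trace exposes (the real helper lives in autotest_common.sh and may differ in detail):

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition holds, stop polling
            sleep 1                    # otherwise retry after a second
        done
        return 1
    }
    # e.g. the host/discovery.sh@130 check being expanded in this part of the trace:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'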
00:32:30.846 [2024-11-19 23:57:05.116080] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:30.846 [2024-11-19 23:57:05.116104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:30.846 [2024-11-19 23:57:05.116137] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:30.846 [2024-11-19 23:57:05.116253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.846 [2024-11-19 23:57:05.116283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc11f0 with addr=10.0.0.2, port=4420 00:32:30.846 [2024-11-19 23:57:05.116300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc11f0 is same with the state(6) to be set 00:32:30.846 [2024-11-19 23:57:05.116322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc11f0 (9): Bad file descriptor 00:32:30.846 [2024-11-19 23:57:05.116343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:30.846 [2024-11-19 23:57:05.116377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:30.846 [2024-11-19 23:57:05.116392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:30.846 [2024-11-19 23:57:05.116404] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:30.846 [2024-11-19 23:57:05.116413] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:30.846 [2024-11-19 23:57:05.116420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:30.846 [2024-11-19 23:57:05.126171] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:30.846 [2024-11-19 23:57:05.126194] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:30.846 [2024-11-19 23:57:05.126203] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:30.846 [2024-11-19 23:57:05.126211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:30.847 [2024-11-19 23:57:05.126237] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:30.847 [2024-11-19 23:57:05.126377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.847 [2024-11-19 23:57:05.126405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc11f0 with addr=10.0.0.2, port=4420 00:32:30.847 [2024-11-19 23:57:05.126429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc11f0 is same with the state(6) to be set 00:32:30.847 [2024-11-19 23:57:05.126452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc11f0 (9): Bad file descriptor 00:32:30.847 [2024-11-19 23:57:05.126472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:30.847 [2024-11-19 23:57:05.126487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:30.847 [2024-11-19 23:57:05.126500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:30.847 [2024-11-19 23:57:05.126512] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:30.847 [2024-11-19 23:57:05.126521] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:30.847 [2024-11-19 23:57:05.126529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:30.847 [2024-11-19 23:57:05.136272] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:30.847 [2024-11-19 23:57:05.136294] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:30.847 [2024-11-19 23:57:05.136303] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:30.847 [2024-11-19 23:57:05.136311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:30.847 [2024-11-19 23:57:05.136336] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:30.847 [2024-11-19 23:57:05.136520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.847 [2024-11-19 23:57:05.136548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc11f0 with addr=10.0.0.2, port=4420 00:32:30.847 [2024-11-19 23:57:05.136565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc11f0 is same with the state(6) to be set 00:32:30.847 [2024-11-19 23:57:05.136587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc11f0 (9): Bad file descriptor 00:32:30.847 [2024-11-19 23:57:05.136613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:30.847 [2024-11-19 23:57:05.136628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:30.847 [2024-11-19 23:57:05.136641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:30.847 [2024-11-19 23:57:05.136654] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:30.847 [2024-11-19 23:57:05.136663] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:30.847 [2024-11-19 23:57:05.136670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:30.847 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.847 [2024-11-19 23:57:05.146371] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:30.847 [2024-11-19 23:57:05.146393] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:30.847 [2024-11-19 23:57:05.146402] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:30.847 [2024-11-19 23:57:05.146409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:30.847 [2024-11-19 23:57:05.146448] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:30.847 [2024-11-19 23:57:05.146652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:30.847 [2024-11-19 23:57:05.146680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc11f0 with addr=10.0.0.2, port=4420 00:32:30.847 [2024-11-19 23:57:05.146696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc11f0 is same with the state(6) to be set 00:32:30.847 [2024-11-19 23:57:05.146719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc11f0 (9): Bad file descriptor 00:32:30.847 [2024-11-19 23:57:05.146740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:30.847 [2024-11-19 23:57:05.146754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:30.847 [2024-11-19 23:57:05.146767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:30.847 [2024-11-19 23:57:05.146780] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:30.847 [2024-11-19 23:57:05.146789] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:30.847 [2024-11-19 23:57:05.146796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
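The repeated connect() failed, errno = 111 (ECONNREFUSED) blocks above are the host's 4420 path retrying after the nvmf_subsystem_remove_listener call earlier in the trace (host/discovery.sh@127) took that listener down; the retries end once the discovery log page reports the 4420 path gone and only 4421 remains. A minimal sketch of that convergence check, under the same rpc.py assumption as above:

    # target side: drop the first listener, as the harness did
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: list the remaining paths for controller nvme0
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # expect, once the stale path is dropped: 4421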
00:32:30.847 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:30.847 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:30.847 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:30.847 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:30.847 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:30.847 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:30.847 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:30.847 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:31.106 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:31.106 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:31.106 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.106 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:31.106 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:31.106 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:31.106 [2024-11-19 23:57:05.156482] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:31.106 [2024-11-19 23:57:05.156505] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:31.106 [2024-11-19 23:57:05.156515] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:31.106 [2024-11-19 23:57:05.156536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:31.106 [2024-11-19 23:57:05.156562] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:31.106 [2024-11-19 23:57:05.156740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.106 [2024-11-19 23:57:05.156767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc11f0 with addr=10.0.0.2, port=4420 00:32:31.106 [2024-11-19 23:57:05.156784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc11f0 is same with the state(6) to be set 00:32:31.106 [2024-11-19 23:57:05.156815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc11f0 (9): Bad file descriptor 00:32:31.106 [2024-11-19 23:57:05.156835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:31.106 [2024-11-19 23:57:05.156850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:31.106 [2024-11-19 23:57:05.156864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:31.106 [2024-11-19 23:57:05.156877] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:31.106 [2024-11-19 23:57:05.156885] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:31.106 [2024-11-19 23:57:05.156893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:31.106 [2024-11-19 23:57:05.166596] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:31.106 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.106 [2024-11-19 23:57:05.166618] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:31.106 [2024-11-19 23:57:05.166628] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:31.106 [2024-11-19 23:57:05.166636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:31.106 [2024-11-19 23:57:05.166660] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:31.106 [2024-11-19 23:57:05.166855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.106 [2024-11-19 23:57:05.166883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc11f0 with addr=10.0.0.2, port=4420 00:32:31.106 [2024-11-19 23:57:05.166900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc11f0 is same with the state(6) to be set 00:32:31.106 [2024-11-19 23:57:05.166931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc11f0 (9): Bad file descriptor 00:32:31.106 [2024-11-19 23:57:05.166958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:31.106 [2024-11-19 23:57:05.166973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:31.106 [2024-11-19 23:57:05.166986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:32:31.106 [2024-11-19 23:57:05.166999] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:31.106 [2024-11-19 23:57:05.167008] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:31.106 [2024-11-19 23:57:05.167016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:31.106 [2024-11-19 23:57:05.176694] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:31.106 [2024-11-19 23:57:05.176715] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:31.106 [2024-11-19 23:57:05.176724] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:31.106 [2024-11-19 23:57:05.176731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:31.106 [2024-11-19 23:57:05.176754] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:31.106 [2024-11-19 23:57:05.176932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.106 [2024-11-19 23:57:05.176959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc11f0 with addr=10.0.0.2, port=4420 00:32:31.106 [2024-11-19 23:57:05.176976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc11f0 is same with the state(6) to be set 00:32:31.106 [2024-11-19 23:57:05.176998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc11f0 (9): Bad file descriptor 00:32:31.106 [2024-11-19 23:57:05.177019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:31.106 [2024-11-19 23:57:05.177033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:31.106 [2024-11-19 23:57:05.177046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:31.106 [2024-11-19 23:57:05.177067] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:31.106 [2024-11-19 23:57:05.177086] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:31.106 [2024-11-19 23:57:05.177094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:31.106 [2024-11-19 23:57:05.186789] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:31.106 [2024-11-19 23:57:05.186809] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:31.106 [2024-11-19 23:57:05.186818] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:31.106 [2024-11-19 23:57:05.186825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:31.106 [2024-11-19 23:57:05.186848] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:31.106 [2024-11-19 23:57:05.187099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.106 [2024-11-19 23:57:05.187128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc11f0 with addr=10.0.0.2, port=4420 00:32:31.106 [2024-11-19 23:57:05.187145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc11f0 is same with the state(6) to be set 00:32:31.106 [2024-11-19 23:57:05.187167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc11f0 (9): Bad file descriptor 00:32:31.106 [2024-11-19 23:57:05.187194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:31.106 [2024-11-19 23:57:05.187209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:31.106 [2024-11-19 23:57:05.187223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:31.106 [2024-11-19 23:57:05.187235] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:31.106 [2024-11-19 23:57:05.187244] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:31.106 [2024-11-19 23:57:05.187252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:31.106 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:32:31.106 23:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:31.106 [2024-11-19 23:57:05.191316] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:31.106 [2024-11-19 23:57:05.191349] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:32.042 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.300 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:32.301 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:32.301 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.301 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:32.301 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:32.301 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:32.301 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:32.301 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:32.301 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.301 23:57:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.235 [2024-11-19 23:57:07.473214] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:33.235 [2024-11-19 23:57:07.473242] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:33.235 [2024-11-19 23:57:07.473268] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:33.493 [2024-11-19 23:57:07.560547] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:33.751 [2024-11-19 23:57:07.867197] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:33.751 [2024-11-19 23:57:07.867972] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xbf1140:1 started. 
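At this point the harness has stopped the first discovery service (bdev_nvme_stop_discovery -b nvme), waited for the controller and bdev lists to drain, confirmed two more notifications, and restarted discovery under the same name. The request/response dump just below shows the negative case: starting discovery again toward the same 10.0.0.2:8009 endpoint is rejected with JSON-RPC error -17 ("File exists"), which the NOT wrapper turns into the expected outcome. A minimal sketch of that pair of calls, same rpc.py assumption as above:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # repeating the call toward the same endpoint should fail with "File exists" (-17)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
        || echo 'duplicate start rejected as expected'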
00:32:33.751 [2024-11-19 23:57:07.870352] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:33.751 [2024-11-19 23:57:07.870402] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:33.751 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.751 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:33.751 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:33.751 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:33.751 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:33.751 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.751 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:33.751 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.751 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:33.751 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.752 request: 00:32:33.752 { 00:32:33.752 "name": "nvme", 00:32:33.752 "trtype": "tcp", 00:32:33.752 "traddr": "10.0.0.2", 00:32:33.752 "adrfam": "ipv4", 00:32:33.752 "trsvcid": "8009", 00:32:33.752 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:33.752 "wait_for_attach": true, 00:32:33.752 "method": "bdev_nvme_start_discovery", 00:32:33.752 "req_id": 1 00:32:33.752 } 00:32:33.752 Got JSON-RPC error response 00:32:33.752 response: 00:32:33.752 { 00:32:33.752 "code": -17, 00:32:33.752 "message": "File exists" 00:32:33.752 } 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.752 23:57:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.752 [2024-11-19 23:57:07.912821] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xbf1140 was disconnected and freed. delete nvme_qpair. 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.752 request: 00:32:33.752 { 00:32:33.752 "name": "nvme_second", 00:32:33.752 "trtype": "tcp", 00:32:33.752 "traddr": "10.0.0.2", 00:32:33.752 "adrfam": "ipv4", 00:32:33.752 "trsvcid": "8009", 00:32:33.752 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:33.752 "wait_for_attach": true, 00:32:33.752 "method": 
"bdev_nvme_start_discovery", 00:32:33.752 "req_id": 1 00:32:33.752 } 00:32:33.752 Got JSON-RPC error response 00:32:33.752 response: 00:32:33.752 { 00:32:33.752 "code": -17, 00:32:33.752 "message": "File exists" 00:32:33.752 } 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:33.752 23:57:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.752 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:33.752 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:33.752 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:33.752 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:33.752 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.752 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.752 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:33.752 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:33.752 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.011 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:34.011 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:34.011 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:34.011 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:34.011 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:34.011 23:57:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:34.011 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:34.011 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:34.011 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:34.011 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.011 23:57:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.945 [2024-11-19 23:57:09.081758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.945 [2024-11-19 23:57:09.081812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbee00 with addr=10.0.0.2, port=8010 00:32:34.945 [2024-11-19 23:57:09.081865] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:34.945 [2024-11-19 23:57:09.081881] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:34.945 [2024-11-19 23:57:09.081893] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:35.878 [2024-11-19 23:57:10.084338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.878 [2024-11-19 23:57:10.084425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbee00 with addr=10.0.0.2, port=8010 00:32:35.878 [2024-11-19 23:57:10.084469] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:35.878 [2024-11-19 23:57:10.084485] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:35.878 [2024-11-19 23:57:10.084499] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:36.811 [2024-11-19 23:57:11.086462] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:36.811 request: 00:32:36.811 { 00:32:36.811 "name": "nvme_second", 00:32:36.811 "trtype": "tcp", 00:32:36.811 "traddr": "10.0.0.2", 00:32:36.811 "adrfam": "ipv4", 00:32:36.811 "trsvcid": "8010", 00:32:36.811 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:36.811 "wait_for_attach": false, 00:32:36.811 "attach_timeout_ms": 3000, 00:32:36.811 "method": "bdev_nvme_start_discovery", 00:32:36.811 "req_id": 1 00:32:36.811 } 00:32:36.811 Got JSON-RPC error response 00:32:36.811 response: 00:32:36.811 { 00:32:36.811 "code": -110, 00:32:36.811 "message": "Connection timed out" 00:32:36.811 } 00:32:36.811 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:36.811 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:36.811 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:36.811 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:36.811 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:36.811 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:36.811 23:57:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:36.811 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:36.811 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.811 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:36.811 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.811 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:36.811 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 304352 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:37.069 rmmod nvme_tcp 00:32:37.069 rmmod nvme_fabrics 00:32:37.069 rmmod nvme_keyring 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 304203 ']' 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 304203 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 304203 ']' 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 304203 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 304203 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 304203' 00:32:37.069 killing process with pid 304203 00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 304203 
00:32:37.069 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 304203 00:32:37.328 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:37.328 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:37.328 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:37.328 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:37.328 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:37.328 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:37.328 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:37.328 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:37.328 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:37.328 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.328 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.328 23:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.244 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:39.244 00:32:39.244 real 0m14.293s 00:32:39.244 user 0m21.305s 00:32:39.244 sys 0m2.807s 00:32:39.244 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.244 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.244 ************************************ 00:32:39.244 END TEST nvmf_host_discovery 00:32:39.244 ************************************ 00:32:39.244 23:57:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:39.244 23:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:39.244 23:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.244 23:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.244 ************************************ 00:32:39.244 START TEST nvmf_host_multipath_status 00:32:39.244 ************************************ 00:32:39.244 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:39.503 * Looking for test storage... 
00:32:39.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:39.503 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:39.503 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:32:39.503 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:39.503 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:39.503 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.503 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.503 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.503 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.503 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.503 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.503 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.503 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:39.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.504 --rc genhtml_branch_coverage=1 00:32:39.504 --rc genhtml_function_coverage=1 00:32:39.504 --rc genhtml_legend=1 00:32:39.504 --rc geninfo_all_blocks=1 00:32:39.504 --rc geninfo_unexecuted_blocks=1 00:32:39.504 00:32:39.504 ' 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:39.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.504 --rc genhtml_branch_coverage=1 00:32:39.504 --rc genhtml_function_coverage=1 00:32:39.504 --rc genhtml_legend=1 00:32:39.504 --rc geninfo_all_blocks=1 00:32:39.504 --rc geninfo_unexecuted_blocks=1 00:32:39.504 00:32:39.504 ' 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:39.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.504 --rc genhtml_branch_coverage=1 00:32:39.504 --rc genhtml_function_coverage=1 00:32:39.504 --rc genhtml_legend=1 00:32:39.504 --rc geninfo_all_blocks=1 00:32:39.504 --rc geninfo_unexecuted_blocks=1 00:32:39.504 00:32:39.504 ' 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:39.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.504 --rc genhtml_branch_coverage=1 00:32:39.504 --rc genhtml_function_coverage=1 00:32:39.504 --rc genhtml_legend=1 00:32:39.504 --rc geninfo_all_blocks=1 00:32:39.504 --rc geninfo_unexecuted_blocks=1 00:32:39.504 00:32:39.504 ' 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:39.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:39.504 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:39.505 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:39.505 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.505 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.505 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.505 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:39.505 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:39.505 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.505 23:57:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:42.036 23:57:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:42.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:42.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.036 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:42.037 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:32:42.037 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:42.037 23:57:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:42.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:42.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:32:42.037 00:32:42.037 --- 10.0.0.2 ping statistics --- 00:32:42.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.037 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:42.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:42.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:32:42.037 00:32:42.037 --- 10.0.0.1 ping statistics --- 00:32:42.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.037 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=308039 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 308039 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 308039 ']' 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:42.037 23:57:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:42.037 23:57:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:42.037 [2024-11-19 23:57:15.980281] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:32:42.038 [2024-11-19 23:57:15.980376] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:42.038 [2024-11-19 23:57:16.055942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:42.038 [2024-11-19 23:57:16.104459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:42.038 [2024-11-19 23:57:16.104524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:42.038 [2024-11-19 23:57:16.104540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:42.038 [2024-11-19 23:57:16.104553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:42.038 [2024-11-19 23:57:16.104565] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:42.038 [2024-11-19 23:57:16.106079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.038 [2024-11-19 23:57:16.106085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.038 23:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:42.038 23:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:42.038 23:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:42.038 23:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:42.038 23:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:42.038 23:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:42.038 23:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=308039 00:32:42.038 23:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:42.296 [2024-11-19 23:57:16.509596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:42.296 23:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:42.862 Malloc0 00:32:42.862 23:57:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:32:42.863 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:43.435 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.435 [2024-11-19 23:57:17.705873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.435 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:43.766 [2024-11-19 23:57:17.970614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:43.766 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=308322 00:32:43.766 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:43.766 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:43.766 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 308322 /var/tmp/bdevperf.sock 00:32:43.766 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 308322 ']' 00:32:43.766 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:43.766 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.766 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:43.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:43.766 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.766 23:57:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:44.063 23:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.063 23:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:44.063 23:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:44.321 23:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:44.887 Nvme0n1 00:32:44.887 23:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:45.451 Nvme0n1 00:32:45.451 23:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:45.451 23:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:47.353 23:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:47.353 23:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:47.611 23:57:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:48.177 23:57:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:49.112 23:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:49.112 23:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:49.112 23:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.112 23:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:49.371 23:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.371 23:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:49.371 23:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.371 23:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:49.629 23:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:49.629 23:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:49.629 23:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.629 23:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:49.887 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.887 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:49.887 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.887 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:50.146 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:50.146 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:50.146 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.146 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:50.404 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:50.404 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:50.404 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.404 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:50.663 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:50.663 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:50.663 23:57:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
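Every check_status above is the same probe repeated for the two ports and three fields: dump io_paths over bdevperf's RPC socket and pick out one path's field with jq. A stand-alone sketch of that port_status helper, assuming the same socket and the jq filter shown verbatim in the trace (the real helper lives in host/multipath_status.sh, and this reconstruction of its body is an assumption):

# port_status <trsvcid> <field> <expected>; field is current, connected or accessible.
port_status() {
    local port=$1 field=$2 expected=$3 value
    value=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$value" == "$expected" ]]
}

# With ANA optimized/optimized, before the policy is switched to active_active
# later in the run, only 4420 is expected to be the current path:
port_status 4420 current true
port_status 4421 current false
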
00:32:50.922 23:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:51.180 23:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:52.554 23:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:52.554 23:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:52.554 23:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.554 23:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:52.554 23:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:52.554 23:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:52.554 23:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.554 23:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:52.813 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.813 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:52.813 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.813 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:53.071 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.071 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:53.071 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.071 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:53.329 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.329 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:53.329 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
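Each ANA transition in the trace is the same pair of calls, one per listener, followed by a one-second sleep so the host can process the ANA change before the next check. A sketch of that set_ANA_state step, reconstructed from the @59/@60 lines (the helper body is an assumption; the RPC arguments are copied verbatim from the trace):

# set_ANA_state <state for 4420> <state for 4421>
# States exercised in this run: optimized, non_optimized, inaccessible.
set_ANA_state() {
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# The transition just exercised here: 4420 drops to non-optimized while 4421 stays
# optimized, so the current path is expected to move to 4421 on the next check.
set_ANA_state non_optimized optimized
sleep 1
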
00:32:53.329 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:53.587 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.587 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:53.587 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.587 23:57:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:53.845 23:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.845 23:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:53.845 23:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:54.103 23:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:54.669 23:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:55.602 23:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:55.602 23:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:55.602 23:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.602 23:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:55.859 23:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.859 23:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:55.859 23:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.859 23:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:56.117 23:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:56.117 23:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:56.117 23:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.117 23:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:56.375 23:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.375 23:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:56.375 23:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.375 23:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:56.633 23:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.633 23:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:56.633 23:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.633 23:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:56.891 23:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.891 23:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:56.891 23:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.891 23:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:57.149 23:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.149 23:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:57.149 23:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:57.407 23:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:57.664 23:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:59.036 23:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:59.036 23:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:59.036 23:57:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.036 23:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:59.036 23:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.036 23:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:59.036 23:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.036 23:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:59.293 23:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:59.293 23:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:59.293 23:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.293 23:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:59.551 23:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.551 23:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:59.551 23:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.551 23:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:59.809 23:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.809 23:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:59.809 23:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.809 23:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:00.066 23:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.066 23:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:00.066 23:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.066 23:57:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:00.632 23:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:00.632 23:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:00.632 23:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:00.632 23:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:00.890 23:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:02.259 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:02.259 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:02.259 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.259 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:02.259 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:02.259 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:02.259 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.259 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:02.517 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:02.517 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:02.517 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.517 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:02.775 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.775 23:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:02.775 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.775 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:03.033 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.033 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:03.033 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.033 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:03.291 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:03.291 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:03.291 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.291 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:03.550 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:03.550 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:03.550 23:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:03.808 23:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:04.066 23:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:05.441 23:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:05.441 23:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:05.441 23:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.441 23:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:05.441 23:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:05.441 23:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:05.441 23:57:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.441 23:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:05.699 23:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.699 23:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:05.699 23:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.699 23:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:05.957 23:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.957 23:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:05.957 23:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.957 23:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:06.215 23:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.215 23:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:06.215 23:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.215 23:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:06.473 23:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:06.473 23:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:06.473 23:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.473 23:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:06.731 23:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.731 23:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:07.296 23:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:07.296 23:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:07.296 23:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:07.554 23:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:08.929 23:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:08.929 23:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:08.929 23:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.929 23:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:08.929 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.929 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:08.929 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.929 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:09.188 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.188 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:09.188 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.188 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:09.446 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.446 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:09.446 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.446 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:09.704 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.704 23:57:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:09.704 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.704 23:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:09.961 23:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.961 23:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:09.961 23:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.962 23:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:10.220 23:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.220 23:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:10.220 23:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:10.786 23:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:10.786 23:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:12.162 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:12.162 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:12.162 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.162 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:12.162 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:12.162 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:12.162 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.162 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:12.420 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.420 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:12.420 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.420 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:12.678 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.678 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:12.678 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.678 23:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:12.936 23:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.936 23:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:12.936 23:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.936 23:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:13.194 23:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.194 23:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:13.194 23:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.194 23:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:13.452 23:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.452 23:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:13.452 23:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:14.018 23:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:14.018 23:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
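From multipath_status.sh@116 onward the host-side policy is active_active, which changes what the checks expect: both ports report current==true whenever they share the best available ANA state (optimized/optimized at @121 above, non_optimized/non_optimized at @131 just below), instead of the single current path seen under the earlier policy. The switch and the probe it affects, copied from the trace with paths shortened:

# Host-side policy change for the multipath bdev built from the two controllers.
./scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

# Under active_active every path in the best available ANA state is "current",
# so this prints true for both ports after the @119 and @129 transitions:
for port in 4420 4421; do
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").current"
done
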
00:33:15.430 23:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:15.430 23:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:15.430 23:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.430 23:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:15.430 23:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.430 23:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:15.430 23:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.430 23:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:15.712 23:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.712 23:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:15.712 23:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.712 23:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:15.970 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.970 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:15.970 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.970 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:16.229 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.229 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:16.229 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.229 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:16.487 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.488 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:16.488 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.488 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:16.745 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.745 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:16.745 23:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:17.003 23:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:17.263 23:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:18.644 23:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:18.644 23:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:18.644 23:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.644 23:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:18.644 23:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.644 23:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:18.644 23:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.644 23:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:18.902 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:18.902 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:18.902 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.902 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:19.160 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:33:19.160 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:19.160 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.160 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:19.419 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.419 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:19.419 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.419 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:19.678 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.678 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:19.678 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.678 23:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:19.936 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:19.936 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 308322 00:33:19.936 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 308322 ']' 00:33:19.936 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 308322 00:33:19.936 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:19.936 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:19.936 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 308322 00:33:20.199 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:20.199 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:20.199 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 308322' 00:33:20.199 killing process with pid 308322 00:33:20.199 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 308322 00:33:20.199 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 308322 00:33:20.199 { 00:33:20.199 "results": [ 00:33:20.199 { 00:33:20.199 "job": "Nvme0n1", 00:33:20.199 
"core_mask": "0x4", 00:33:20.199 "workload": "verify", 00:33:20.199 "status": "terminated", 00:33:20.199 "verify_range": { 00:33:20.199 "start": 0, 00:33:20.199 "length": 16384 00:33:20.199 }, 00:33:20.199 "queue_depth": 128, 00:33:20.199 "io_size": 4096, 00:33:20.199 "runtime": 34.43999, 00:33:20.199 "iops": 7948.579543722283, 00:33:20.199 "mibps": 31.04913884266517, 00:33:20.199 "io_failed": 0, 00:33:20.199 "io_timeout": 0, 00:33:20.199 "avg_latency_us": 16077.562012565446, 00:33:20.199 "min_latency_us": 256.37925925925924, 00:33:20.199 "max_latency_us": 4026531.84 00:33:20.199 } 00:33:20.199 ], 00:33:20.199 "core_count": 1 00:33:20.199 } 00:33:20.199 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 308322 00:33:20.199 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:20.199 [2024-11-19 23:57:18.031569] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:33:20.199 [2024-11-19 23:57:18.031645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308322 ] 00:33:20.199 [2024-11-19 23:57:18.099153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.199 [2024-11-19 23:57:18.151468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:20.199 Running I/O for 90 seconds... 00:33:20.199 8302.00 IOPS, 32.43 MiB/s [2024-11-19T22:57:54.511Z] 8509.50 IOPS, 33.24 MiB/s [2024-11-19T22:57:54.511Z] 8503.33 IOPS, 33.22 MiB/s [2024-11-19T22:57:54.511Z] 8507.25 IOPS, 33.23 MiB/s [2024-11-19T22:57:54.511Z] 8527.80 IOPS, 33.31 MiB/s [2024-11-19T22:57:54.511Z] 8534.67 IOPS, 33.34 MiB/s [2024-11-19T22:57:54.511Z] 8565.43 IOPS, 33.46 MiB/s [2024-11-19T22:57:54.511Z] 8586.50 IOPS, 33.54 MiB/s [2024-11-19T22:57:54.511Z] 8580.67 IOPS, 33.52 MiB/s [2024-11-19T22:57:54.511Z] 8574.60 IOPS, 33.49 MiB/s [2024-11-19T22:57:54.511Z] 8557.27 IOPS, 33.43 MiB/s [2024-11-19T22:57:54.511Z] 8554.00 IOPS, 33.41 MiB/s [2024-11-19T22:57:54.511Z] 8550.69 IOPS, 33.40 MiB/s [2024-11-19T22:57:54.511Z] 8547.07 IOPS, 33.39 MiB/s [2024-11-19T22:57:54.511Z] 8547.27 IOPS, 33.39 MiB/s [2024-11-19T22:57:54.511Z] [2024-11-19 23:57:34.885460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.199 [2024-11-19 23:57:34.885530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:20.199 [2024-11-19 23:57:34.885607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.199 [2024-11-19 23:57:34.885629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:20.199 [2024-11-19 23:57:34.885652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.199 [2024-11-19 23:57:34.885669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:20.199 [2024-11-19 23:57:34.885691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.199 [2024-11-19 23:57:34.885707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:20.199 [2024-11-19 23:57:34.885729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.199 [2024-11-19 23:57:34.885745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:20.199 [2024-11-19 23:57:34.885767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.199 [2024-11-19 23:57:34.885783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:20.199 [2024-11-19 23:57:34.885804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.199 [2024-11-19 23:57:34.885825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:20.199 [2024-11-19 23:57:34.885847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.199 [2024-11-19 23:57:34.885863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:20.199 [2024-11-19 23:57:34.885884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.199 [2024-11-19 23:57:34.885900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:20.199 [2024-11-19 23:57:34.885934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.199 [2024-11-19 23:57:34.885951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:20.199 [2024-11-19 23:57:34.885973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.199 [2024-11-19 23:57:34.885989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:33:20.200 [2024-11-19 23:57:34.886110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.886970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.886993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.200 [2024-11-19 23:57:34.887340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.200 [2024-11-19 23:57:34.887380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.200 [2024-11-19 23:57:34.887434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.200 [2024-11-19 23:57:34.887473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.200 [2024-11-19 23:57:34.887512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.200 [2024-11-19 23:57:34.887549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.200 [2024-11-19 23:57:34.887587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.200 [2024-11-19 23:57:34.887950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:20.200 [2024-11-19 23:57:34.887977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.887994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888550] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.888966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.888991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.889007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 
sqhd:0076 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.889050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.889102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.889148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.889191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.889232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.889274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.889316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.201 [2024-11-19 23:57:34.889357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.201 [2024-11-19 23:57:34.889400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.201 [2024-11-19 23:57:34.889442] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.201 [2024-11-19 23:57:34.889491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.201 [2024-11-19 23:57:34.889535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.201 [2024-11-19 23:57:34.889576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.201 [2024-11-19 23:57:34.889618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.201 [2024-11-19 23:57:34.889661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:20.201 [2024-11-19 23:57:34.889686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.201 [2024-11-19 23:57:34.889701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.889727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.889743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.889769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.889785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.889810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.889826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.889852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 
[2024-11-19 23:57:34.889868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.889893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.889909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.889934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.889951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.889976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.889996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 
lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.890968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.890984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.891010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.891026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.891055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.891080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.891108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.891125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 
dnr:0 00:33:20.202 [2024-11-19 23:57:34.891150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.891166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.891191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.891207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.891232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.891248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.891273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.891289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.891314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.202 [2024-11-19 23:57:34.891330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:20.202 [2024-11-19 23:57:34.891356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-11-19 23:57:34.891371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:34.891397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.203 [2024-11-19 23:57:34.891414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:20.203 8073.56 IOPS, 31.54 MiB/s [2024-11-19T22:57:54.515Z] 7598.65 IOPS, 29.68 MiB/s [2024-11-19T22:57:54.515Z] 7176.50 IOPS, 28.03 MiB/s [2024-11-19T22:57:54.515Z] 6798.79 IOPS, 26.56 MiB/s [2024-11-19T22:57:54.515Z] 6843.70 IOPS, 26.73 MiB/s [2024-11-19T22:57:54.515Z] 6928.81 IOPS, 27.07 MiB/s [2024-11-19T22:57:54.515Z] 7018.00 IOPS, 27.41 MiB/s [2024-11-19T22:57:54.515Z] 7176.22 IOPS, 28.03 MiB/s [2024-11-19T22:57:54.515Z] 7320.58 IOPS, 28.60 MiB/s [2024-11-19T22:57:54.515Z] 7457.60 IOPS, 29.13 MiB/s [2024-11-19T22:57:54.515Z] 7501.62 IOPS, 29.30 MiB/s [2024-11-19T22:57:54.515Z] 7545.85 IOPS, 29.48 MiB/s [2024-11-19T22:57:54.515Z] 7579.39 IOPS, 29.61 MiB/s [2024-11-19T22:57:54.515Z] 7646.83 IOPS, 29.87 MiB/s [2024-11-19T22:57:54.515Z] 7753.23 IOPS, 30.29 MiB/s [2024-11-19T22:57:54.515Z] 7848.55 IOPS, 30.66 MiB/s [2024-11-19T22:57:54.515Z] [2024-11-19 23:57:51.532770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.532846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.532911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.532947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.532973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.532990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.533539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.533556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.534763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.534792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.534819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.534837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.534859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.534881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.534905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.534922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.534943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.534959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.534981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.534998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.535019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.535035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.535057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.535082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.535107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.535124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.535151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.535168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:20.203 [2024-11-19 23:57:51.535189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.203 [2024-11-19 23:57:51.535205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:33:20.204 [2024-11-19 23:57:51.535301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.204 [2024-11-19 23:57:51.535695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.204 [2024-11-19 23:57:51.535732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.204 [2024-11-19 23:57:51.535770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.204 [2024-11-19 23:57:51.535808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.204 [2024-11-19 23:57:51.535846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.535979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.535995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.536016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.536032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.536054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.536082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:40368 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.537964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:20.204 [2024-11-19 23:57:51.537986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.204 [2024-11-19 23:57:51.538001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:20.205 [2024-11-19 23:57:51.538023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.205 [2024-11-19 23:57:51.538038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:20.205 [2024-11-19 23:57:51.538060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.205 [2024-11-19 23:57:51.538086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:20.205 7913.75 IOPS, 30.91 MiB/s [2024-11-19T22:57:54.517Z] 7931.12 IOPS, 30.98 MiB/s [2024-11-19T22:57:54.517Z] 7945.59 IOPS, 31.04 MiB/s [2024-11-19T22:57:54.517Z] Received shutdown signal, test time was about 34.440811 seconds 00:33:20.205 00:33:20.205 Latency(us) 00:33:20.205 [2024-11-19T22:57:54.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.205 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:33:20.205 Verification LBA range: start 0x0 length 0x4000 00:33:20.205 Nvme0n1 : 34.44 7948.58 31.05 0.00 0.00 16077.56 256.38 4026531.84 00:33:20.205 [2024-11-19T22:57:54.517Z] =================================================================================================================== 00:33:20.205 [2024-11-19T22:57:54.517Z] Total : 7948.58 31.05 0.00 0.00 16077.56 256.38 4026531.84 00:33:20.205 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.771 rmmod nvme_tcp 00:33:20.771 rmmod nvme_fabrics 00:33:20.771 rmmod nvme_keyring 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 308039 ']' 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 308039 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 308039 ']' 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 308039 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 308039 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 308039' 00:33:20.771 killing process with pid 308039 00:33:20.771 23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 308039 00:33:20.771 
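(The summary above is internally consistent: with the 4096-byte I/O size shown in the Job line, 7948.58 IOPS works out to 7948.58 x 4096 / 2^20 ~= 31.05 MiB/s over the ~34.44 s run.) The teardown traced here reduces to three steps: delete the subsystem over JSON-RPC, unload the kernel initiator modules, and stop the target process. A condensed sketch of the same sequence, assuming it is run from the SPDK source tree with the default RPC socket; the pid (308039) and module names are the ones from this run:

# 1. Remove the NVMe-oF subsystem from the running target over JSON-RPC.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# 2. Flush and unload the kernel initiator modules pulled in for the test
#    (the rmmod output above shows nvme_tcp, nvme_fabrics and nvme_keyring going away).
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# 3. Stop the target process and wait for it to exit (what killprocess does in the harness).
kill 308039 && wait 308039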
23:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 308039 00:33:21.031 23:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:21.031 23:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:21.031 23:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:21.031 23:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:21.031 23:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:21.031 23:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:21.031 23:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:21.031 23:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:21.031 23:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:21.031 23:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.031 23:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.031 23:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.936 23:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.936 00:33:22.936 real 0m43.639s 00:33:22.936 user 2m12.941s 00:33:22.936 sys 0m10.793s 00:33:22.936 23:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.936 23:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:22.936 ************************************ 00:33:22.936 END TEST nvmf_host_multipath_status 00:33:22.936 ************************************ 00:33:22.936 23:57:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:22.936 23:57:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:22.936 23:57:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.936 23:57:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.936 ************************************ 00:33:22.936 START TEST nvmf_discovery_remove_ifc 00:33:22.936 ************************************ 00:33:22.936 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:23.195 * Looking for test storage... 
00:33:23.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:23.195 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.196 --rc genhtml_branch_coverage=1 00:33:23.196 --rc genhtml_function_coverage=1 00:33:23.196 --rc genhtml_legend=1 00:33:23.196 --rc geninfo_all_blocks=1 00:33:23.196 --rc geninfo_unexecuted_blocks=1 00:33:23.196 00:33:23.196 ' 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.196 --rc genhtml_branch_coverage=1 00:33:23.196 --rc genhtml_function_coverage=1 00:33:23.196 --rc genhtml_legend=1 00:33:23.196 --rc geninfo_all_blocks=1 00:33:23.196 --rc geninfo_unexecuted_blocks=1 00:33:23.196 00:33:23.196 ' 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.196 --rc genhtml_branch_coverage=1 00:33:23.196 --rc genhtml_function_coverage=1 00:33:23.196 --rc genhtml_legend=1 00:33:23.196 --rc geninfo_all_blocks=1 00:33:23.196 --rc geninfo_unexecuted_blocks=1 00:33:23.196 00:33:23.196 ' 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.196 --rc genhtml_branch_coverage=1 00:33:23.196 --rc genhtml_function_coverage=1 00:33:23.196 --rc genhtml_legend=1 00:33:23.196 --rc geninfo_all_blocks=1 00:33:23.196 --rc geninfo_unexecuted_blocks=1 00:33:23.196 00:33:23.196 ' 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.196 
23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:23.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.196 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.197 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:23.197 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:23.197 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:23.197 23:57:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:25.098 23:57:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:25.098 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.098 23:57:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:25.098 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.098 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:25.099 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:25.099 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:25.099 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:25.357 
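The block above builds the whole two-endpoint TCP test bed on a single host: the target-side port (cvl_0_0) is moved into a network namespace and given 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, and an iptables rule admits NVMe/TCP traffic on port 4420. A minimal standalone sketch of the same wiring, using the interface and namespace names from this run (adjust for other NICs):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"                                        # namespace that will host the target
ip link set cvl_0_0 netns "$NS"                           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator side
ping -c 1 10.0.0.2                                        # connectivity sanity check, as in the log below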
23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:25.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:25.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:33:25.357 00:33:25.357 --- 10.0.0.2 ping statistics --- 00:33:25.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.357 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:25.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:33:25.357 00:33:25.357 --- 10.0.0.1 ping statistics --- 00:33:25.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.357 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=314795 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 314795 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 314795 ']' 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.357 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:25.358 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:25.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.358 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:25.358 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.358 [2024-11-19 23:57:59.538304] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:33:25.358 [2024-11-19 23:57:59.538405] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.358 [2024-11-19 23:57:59.615105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.358 [2024-11-19 23:57:59.662430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.358 [2024-11-19 23:57:59.662506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:25.358 [2024-11-19 23:57:59.662520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.358 [2024-11-19 23:57:59.662530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.358 [2024-11-19 23:57:59.662540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:25.358 [2024-11-19 23:57:59.663167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.616 [2024-11-19 23:57:59.821159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.616 [2024-11-19 23:57:59.829384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:25.616 null0 00:33:25.616 [2024-11-19 23:57:59.861261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=314815 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 314815 /tmp/host.sock 00:33:25.616 23:57:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 314815 ']' 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:25.616 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:25.616 23:57:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.874 [2024-11-19 23:57:59.932203] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:33:25.874 [2024-11-19 23:57:59.932302] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314815 ] 00:33:25.874 [2024-11-19 23:58:00.004594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.874 [2024-11-19 23:58:00.057677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:26.132 23:58:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.132 23:58:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:27.067 [2024-11-19 23:58:01.348728] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:27.067 [2024-11-19 23:58:01.348759] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:27.067 [2024-11-19 23:58:01.348791] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:27.325 [2024-11-19 23:58:01.476310] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:27.583 [2024-11-19 23:58:01.699612] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:27.583 [2024-11-19 23:58:01.700657] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xc38c00:1 started. 00:33:27.583 [2024-11-19 23:58:01.702524] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:27.583 [2024-11-19 23:58:01.702587] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:27.583 [2024-11-19 23:58:01.702631] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:27.583 [2024-11-19 23:58:01.702660] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:27.583 [2024-11-19 23:58:01.702692] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:27.583 [2024-11-19 23:58:01.707760] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xc38c00 was disconnected and freed. delete nvme_qpair. 
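At this point the host-side SPDK app (the one serving /tmp/host.sock) has attached the discovery service on 10.0.0.2:8009, auto-connected the advertised subsystem and exposed it as bdev nvme0n1; the harness's wait_for_bdev/get_bdev_list helpers simply poll bdev_get_bdevs until the expected name shows up. A rough equivalent using plain rpc.py calls (sketch only; rpc_cmd in the trace is a thin wrapper that forwards these arguments to rpc.py -s <socket>, and the harness compares the full sorted bdev list rather than grepping for one name):

RPC="./scripts/rpc.py -s /tmp/host.sock"
# Attach the discovery controller and auto-attach whatever it advertises, with the
# same aggressive timeouts this test uses (2 s controller loss, 1 s reconnect delay).
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
# Poll until the namespace bdev appears (what the wait_for_bdev nvme0n1 loop does).
until $RPC bdev_get_bdevs | jq -r '.[].name' | grep -q '^nvme0n1$'; do sleep 1; done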
00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.583 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:27.584 23:58:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:28.959 23:58:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:28.959 23:58:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:28.959 23:58:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:28.959 23:58:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.959 23:58:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:28.959 23:58:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:28.959 23:58:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:28.959 23:58:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.959 23:58:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:28.959 23:58:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:29.892 23:58:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:29.892 23:58:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:29.892 23:58:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:29.892 23:58:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.892 23:58:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:29.892 23:58:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:29.892 23:58:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:29.892 23:58:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.892 23:58:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:29.892 23:58:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:30.823 23:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:30.823 23:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:30.823 23:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:30.823 23:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.823 23:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:30.823 23:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.823 23:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:30.823 23:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.823 23:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:30.823 23:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:31.757 23:58:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:31.757 23:58:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:31.757 23:58:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:31.757 23:58:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.757 23:58:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:31.757 23:58:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.757 23:58:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:31.757 23:58:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.757 23:58:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:31.757 23:58:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:33.130 23:58:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:33.130 23:58:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.130 23:58:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:33.130 23:58:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.130 23:58:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:33.130 23:58:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.130 23:58:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:33.130 23:58:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.130 23:58:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:33.130 23:58:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:33.130 [2024-11-19 23:58:07.143751] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:33.130 [2024-11-19 23:58:07.143830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.130 [2024-11-19 23:58:07.143855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.130 [2024-11-19 23:58:07.143878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.130 [2024-11-19 23:58:07.143894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.130 [2024-11-19 23:58:07.143911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.130 [2024-11-19 23:58:07.143927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.130 [2024-11-19 23:58:07.143943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.130 [2024-11-19 23:58:07.143959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.130 [2024-11-19 23:58:07.143975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:33.130 [2024-11-19 23:58:07.143991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:33.130 [2024-11-19 23:58:07.144007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc15400 is same with the state(6) to be set 00:33:33.130 [2024-11-19 23:58:07.153765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc15400 (9): Bad file descriptor 00:33:33.130 [2024-11-19 23:58:07.163811] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:33.130 [2024-11-19 23:58:07.163839] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
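The stretch of xtrace above is the test's wait loop: after the target-side address was removed and cvl_0_0 was downed, wait_for_bdev '' keeps calling get_bdev_list once a second until the attached namespace (nvme0n1) disappears from the host app. A minimal sketch of those two helpers, reconstructed from the xtrace rather than copied from discovery_remove_ifc.sh (rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py, and any timeout handling in the real script is omitted):

    get_bdev_list() {
        # Ask the SPDK host app on /tmp/host.sock for its bdevs and flatten
        # the names into one sorted, space-separated string.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once a second until the bdev list equals the expected value;
        # an empty argument means "wait until no bdevs are left".
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }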
00:33:33.130 [2024-11-19 23:58:07.163852] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:33.130 [2024-11-19 23:58:07.163862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:33.130 [2024-11-19 23:58:07.163906] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:34.063 23:58:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.063 23:58:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.063 23:58:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.063 23:58:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.063 23:58:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.063 23:58:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.063 23:58:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.063 [2024-11-19 23:58:08.191119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:34.063 [2024-11-19 23:58:08.191198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc15400 with addr=10.0.0.2, port=4420 00:33:34.063 [2024-11-19 23:58:08.191229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc15400 is same with the state(6) to be set 00:33:34.063 [2024-11-19 23:58:08.191288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc15400 (9): Bad file descriptor 00:33:34.063 [2024-11-19 23:58:08.191825] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:33:34.063 [2024-11-19 23:58:08.191877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:34.063 [2024-11-19 23:58:08.191896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:34.063 [2024-11-19 23:58:08.191914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:34.063 [2024-11-19 23:58:08.191931] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:34.063 [2024-11-19 23:58:08.191945] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:34.063 [2024-11-19 23:58:08.191955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:34.063 [2024-11-19 23:58:08.191973] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
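The errors in this stretch are the intended effect of the fault injected a few seconds earlier: 10.0.0.2 was deleted from cvl_0_0 inside the cvl_0_0_ns_spdk namespace and the link was downed, so every reconnect attempt to 10.0.0.2:4420 fails with errno 110 (ETIMEDOUT, "Connection timed out") and bdev_nvme keeps cycling through disconnect, failed reconnect, and reset. The injection and its later reversal, exactly as they appear in this trace (steps @75/@76 and @82/@83 of discovery_remove_ifc.sh):

    # Fault injection: make the target unreachable from the host app.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

    # Recovery, run later in the test: restore the address and bring the link back up.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up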
00:33:34.063 [2024-11-19 23:58:08.191983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:34.063 23:58:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.063 23:58:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:34.063 23:58:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:34.997 [2024-11-19 23:58:09.194490] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:34.997 [2024-11-19 23:58:09.194545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:34.997 [2024-11-19 23:58:09.194580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:34.997 [2024-11-19 23:58:09.194596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:34.997 [2024-11-19 23:58:09.194614] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:34.997 [2024-11-19 23:58:09.194631] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:34.997 [2024-11-19 23:58:09.194644] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:34.997 [2024-11-19 23:58:09.194654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:34.997 [2024-11-19 23:58:09.194711] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:34.997 [2024-11-19 23:58:09.194771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.997 [2024-11-19 23:58:09.194804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.997 [2024-11-19 23:58:09.194826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.997 [2024-11-19 23:58:09.194843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.997 [2024-11-19 23:58:09.194861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.997 [2024-11-19 23:58:09.194877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.997 [2024-11-19 23:58:09.194894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.997 [2024-11-19 23:58:09.194909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.997 [2024-11-19 23:58:09.194925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.997 [2024-11-19 23:58:09.194941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.997 [2024-11-19 23:58:09.194958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:34.997 [2024-11-19 23:58:09.195023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc04b40 (9): Bad file descriptor 00:33:34.997 [2024-11-19 23:58:09.196003] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:34.997 [2024-11-19 23:58:09.196028] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:34.997 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.997 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.997 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.997 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.997 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.997 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.997 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.997 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.997 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:34.997 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:34.997 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:35.255 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:35.255 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:35.255 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.255 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:35.255 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.255 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:35.255 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.255 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:35.255 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.255 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:35.255 23:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:36.189 23:58:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:36.189 23:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:36.189 23:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:36.189 23:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.189 23:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.189 23:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:36.189 23:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:36.189 23:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.189 23:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:36.189 23:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:37.120 [2024-11-19 23:58:11.245222] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:37.120 [2024-11-19 23:58:11.245258] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:37.120 [2024-11-19 23:58:11.245283] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:37.120 [2024-11-19 23:58:11.331578] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:37.378 23:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:37.378 23:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.378 23:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:37.378 23:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.378 23:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.378 23:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:37.378 23:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:37.378 23:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.378 23:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:37.378 23:58:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:37.378 [2024-11-19 23:58:11.506774] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:37.378 [2024-11-19 23:58:11.507737] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xc17720:1 started. 
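With the address restored, the discovery poller on 10.0.0.2:8009 sees nqn.2016-06.io.spdk:cnode0 again, attaches it as nvme1, and wait_for_bdev nvme1n1 can complete; the xtrace that follows then shuts down the host app (pid 314815) and the target (pid 314795) through the harness killprocess helper. A rough sketch of that helper as it can be read back from the trace (the real autotest_common.sh version may differ in details):

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 0          # nothing to do if it already exited
        if [[ "$(uname)" == Linux ]]; then
            # Refuse to kill a bare sudo wrapper; check the real command name.
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ "$process_name" != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }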
00:33:37.378 [2024-11-19 23:58:11.509285] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:37.378 [2024-11-19 23:58:11.509329] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:37.378 [2024-11-19 23:58:11.509391] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:37.378 [2024-11-19 23:58:11.509429] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:37.378 [2024-11-19 23:58:11.509447] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:37.378 [2024-11-19 23:58:11.514064] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xc17720 was disconnected and freed. delete nvme_qpair. 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 314815 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 314815 ']' 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 314815 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314815 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 314815' 00:33:38.312 killing process with pid 314815 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 314815 00:33:38.312 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 314815 00:33:38.570 23:58:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:38.570 rmmod nvme_tcp 00:33:38.570 rmmod nvme_fabrics 00:33:38.570 rmmod nvme_keyring 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 314795 ']' 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 314795 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 314795 ']' 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 314795 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:38.570 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314795 00:33:38.830 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:38.830 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:38.830 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 314795' 00:33:38.830 killing process with pid 314795 00:33:38.830 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 314795 00:33:38.830 23:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 314795 00:33:38.830 23:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:38.830 23:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:38.830 23:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:38.830 23:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:38.830 23:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:38.830 23:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:38.830 23:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:38.830 23:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:38.830 23:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:38.830 23:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.830 23:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.830 23:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:41.363 00:33:41.363 real 0m17.928s 00:33:41.363 user 0m26.280s 00:33:41.363 sys 0m2.947s 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.363 ************************************ 00:33:41.363 END TEST nvmf_discovery_remove_ifc 00:33:41.363 ************************************ 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.363 ************************************ 00:33:41.363 START TEST nvmf_identify_kernel_target 00:33:41.363 ************************************ 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:41.363 * Looking for test storage... 
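One detail worth noting in the teardown above: the iptr step removes only rules the harness itself added. Every rule is inserted with an -m comment --comment 'SPDK_NVMF:...' tag (the matching insert for port 4420 is visible later in this log when the next test brings its interfaces up), so cleanup can simply drop the tagged lines from the saved ruleset and restore everything else untouched. Condensed, and assuming nothing beyond standard iptables-save/iptables-restore behaviour:

    # Setup (ipts wrapper): add the rule and tag it so it can be found again.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Teardown (iptr): reload the current ruleset minus the tagged rules.
    iptables-save | grep -v SPDK_NVMF | iptables-restore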
00:33:41.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:41.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.363 --rc genhtml_branch_coverage=1 00:33:41.363 --rc genhtml_function_coverage=1 00:33:41.363 --rc genhtml_legend=1 00:33:41.363 --rc geninfo_all_blocks=1 00:33:41.363 --rc geninfo_unexecuted_blocks=1 00:33:41.363 00:33:41.363 ' 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:41.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.363 --rc genhtml_branch_coverage=1 00:33:41.363 --rc genhtml_function_coverage=1 00:33:41.363 --rc genhtml_legend=1 00:33:41.363 --rc geninfo_all_blocks=1 00:33:41.363 --rc geninfo_unexecuted_blocks=1 00:33:41.363 00:33:41.363 ' 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:41.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.363 --rc genhtml_branch_coverage=1 00:33:41.363 --rc genhtml_function_coverage=1 00:33:41.363 --rc genhtml_legend=1 00:33:41.363 --rc geninfo_all_blocks=1 00:33:41.363 --rc geninfo_unexecuted_blocks=1 00:33:41.363 00:33:41.363 ' 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:41.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.363 --rc genhtml_branch_coverage=1 00:33:41.363 --rc genhtml_function_coverage=1 00:33:41.363 --rc genhtml_legend=1 00:33:41.363 --rc geninfo_all_blocks=1 00:33:41.363 --rc geninfo_unexecuted_blocks=1 00:33:41.363 00:33:41.363 ' 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.363 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:41.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:41.364 23:58:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:43.266 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:43.266 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:43.266 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:43.266 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:43.266 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:43.267 23:58:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:43.267 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:43.267 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:43.267 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:43.267 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:43.267 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:43.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:43.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:33:43.268 00:33:43.268 --- 10.0.0.2 ping statistics --- 00:33:43.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.268 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:43.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:43.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:33:43.268 00:33:43.268 --- 10.0.0.1 ping statistics --- 00:33:43.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.268 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:43.268 23:58:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:43.268 23:58:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:44.646 Waiting for block devices as requested 00:33:44.646 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:44.646 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:44.646 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:44.905 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:44.905 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:44.905 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:44.905 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:45.164 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:45.164 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:45.164 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:45.164 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:45.421 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:45.421 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:45.421 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:45.421 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:45.680 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:45.680 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
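The trace that follows sets up an in-kernel NVMe-oF target through nvmet's configfs tree: it exports the reclaimed /dev/nvme0n1 as namespace 1 of nqn.2016-06.io.spdk:testnqn, listens on 10.0.0.1:4420 over TCP, and then checks the result with nvme discover. Because xtrace does not print redirection targets, only the bare echo/mkdir/ln -s commands are visible above and below; the attribute file names in this sketch are the standard nvmet configfs ones and are an assumption, not a copy of common.sh (the echo of "SPDK-nqn.2016-06.io.spdk:testnqn" goes to a subsystem identity attribute whose exact name is likewise not visible and is left out here):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    modprobe nvmet                     # target core; the TCP transport loads on demand
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"

    echo 1            > "$subsys/attr_allow_any_host"       # accept any host NQN
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # back namespace 1 with the local disk
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"

    # Expose the subsystem on the port, then verify from the initiator side.
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420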
00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:45.680 No valid GPT data, bailing 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:45.680 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:45.939 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:45.939 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:45.939 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:45.939 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:45.939 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:33:45.939 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:45.939 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:33:45.939 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:33:45.939 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:33:45.939 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:45.939 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:45.939 00:33:45.939 Discovery Log Number of Records 2, Generation counter 2 00:33:45.939 =====Discovery Log Entry 0====== 00:33:45.939 trtype: tcp 00:33:45.939 adrfam: ipv4 00:33:45.939 subtype: current discovery subsystem 00:33:45.939 treq: not specified, sq flow control disable supported 00:33:45.939 portid: 1 00:33:45.939 trsvcid: 4420 00:33:45.939 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:45.939 traddr: 10.0.0.1 00:33:45.939 eflags: none 00:33:45.939 sectype: none 00:33:45.939 =====Discovery Log Entry 1====== 00:33:45.939 trtype: tcp 00:33:45.939 adrfam: ipv4 00:33:45.939 subtype: nvme subsystem 00:33:45.939 treq: not specified, sq flow control disable 
supported 00:33:45.939 portid: 1 00:33:45.939 trsvcid: 4420 00:33:45.939 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:45.939 traddr: 10.0.0.1 00:33:45.939 eflags: none 00:33:45.939 sectype: none 00:33:45.939 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:45.939 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:45.939 ===================================================== 00:33:45.939 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:45.939 ===================================================== 00:33:45.939 Controller Capabilities/Features 00:33:45.939 ================================ 00:33:45.939 Vendor ID: 0000 00:33:45.939 Subsystem Vendor ID: 0000 00:33:45.939 Serial Number: d5747f04e73fcca53c40 00:33:45.939 Model Number: Linux 00:33:45.939 Firmware Version: 6.8.9-20 00:33:45.939 Recommended Arb Burst: 0 00:33:45.939 IEEE OUI Identifier: 00 00 00 00:33:45.939 Multi-path I/O 00:33:45.939 May have multiple subsystem ports: No 00:33:45.939 May have multiple controllers: No 00:33:45.939 Associated with SR-IOV VF: No 00:33:45.939 Max Data Transfer Size: Unlimited 00:33:45.939 Max Number of Namespaces: 0 00:33:45.939 Max Number of I/O Queues: 1024 00:33:45.939 NVMe Specification Version (VS): 1.3 00:33:45.939 NVMe Specification Version (Identify): 1.3 00:33:45.939 Maximum Queue Entries: 1024 00:33:45.939 Contiguous Queues Required: No 00:33:45.939 Arbitration Mechanisms Supported 00:33:45.939 Weighted Round Robin: Not Supported 00:33:45.939 Vendor Specific: Not Supported 00:33:45.939 Reset Timeout: 7500 ms 00:33:45.939 Doorbell Stride: 4 bytes 00:33:45.939 NVM Subsystem Reset: Not Supported 00:33:45.939 Command Sets Supported 00:33:45.939 NVM Command Set: Supported 00:33:45.939 Boot Partition: Not Supported 00:33:45.939 Memory Page Size Minimum: 4096 bytes 00:33:45.939 Memory Page Size Maximum: 4096 bytes 00:33:45.939 Persistent Memory Region: Not Supported 00:33:45.939 Optional Asynchronous Events Supported 00:33:45.939 Namespace Attribute Notices: Not Supported 00:33:45.939 Firmware Activation Notices: Not Supported 00:33:45.939 ANA Change Notices: Not Supported 00:33:45.939 PLE Aggregate Log Change Notices: Not Supported 00:33:45.939 LBA Status Info Alert Notices: Not Supported 00:33:45.939 EGE Aggregate Log Change Notices: Not Supported 00:33:45.939 Normal NVM Subsystem Shutdown event: Not Supported 00:33:45.939 Zone Descriptor Change Notices: Not Supported 00:33:45.939 Discovery Log Change Notices: Supported 00:33:45.939 Controller Attributes 00:33:45.939 128-bit Host Identifier: Not Supported 00:33:45.939 Non-Operational Permissive Mode: Not Supported 00:33:45.939 NVM Sets: Not Supported 00:33:45.940 Read Recovery Levels: Not Supported 00:33:45.940 Endurance Groups: Not Supported 00:33:45.940 Predictable Latency Mode: Not Supported 00:33:45.940 Traffic Based Keep ALive: Not Supported 00:33:45.940 Namespace Granularity: Not Supported 00:33:45.940 SQ Associations: Not Supported 00:33:45.940 UUID List: Not Supported 00:33:45.940 Multi-Domain Subsystem: Not Supported 00:33:45.940 Fixed Capacity Management: Not Supported 00:33:45.940 Variable Capacity Management: Not Supported 00:33:45.940 Delete Endurance Group: Not Supported 00:33:45.940 Delete NVM Set: Not Supported 00:33:45.940 Extended LBA Formats Supported: Not Supported 00:33:45.940 Flexible Data Placement 
Supported: Not Supported 00:33:45.940 00:33:45.940 Controller Memory Buffer Support 00:33:45.940 ================================ 00:33:45.940 Supported: No 00:33:45.940 00:33:45.940 Persistent Memory Region Support 00:33:45.940 ================================ 00:33:45.940 Supported: No 00:33:45.940 00:33:45.940 Admin Command Set Attributes 00:33:45.940 ============================ 00:33:45.940 Security Send/Receive: Not Supported 00:33:45.940 Format NVM: Not Supported 00:33:45.940 Firmware Activate/Download: Not Supported 00:33:45.940 Namespace Management: Not Supported 00:33:45.940 Device Self-Test: Not Supported 00:33:45.940 Directives: Not Supported 00:33:45.940 NVMe-MI: Not Supported 00:33:45.940 Virtualization Management: Not Supported 00:33:45.940 Doorbell Buffer Config: Not Supported 00:33:45.940 Get LBA Status Capability: Not Supported 00:33:45.940 Command & Feature Lockdown Capability: Not Supported 00:33:45.940 Abort Command Limit: 1 00:33:45.940 Async Event Request Limit: 1 00:33:45.940 Number of Firmware Slots: N/A 00:33:45.940 Firmware Slot 1 Read-Only: N/A 00:33:45.940 Firmware Activation Without Reset: N/A 00:33:45.940 Multiple Update Detection Support: N/A 00:33:45.940 Firmware Update Granularity: No Information Provided 00:33:45.940 Per-Namespace SMART Log: No 00:33:45.940 Asymmetric Namespace Access Log Page: Not Supported 00:33:45.940 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:45.940 Command Effects Log Page: Not Supported 00:33:45.940 Get Log Page Extended Data: Supported 00:33:45.940 Telemetry Log Pages: Not Supported 00:33:45.940 Persistent Event Log Pages: Not Supported 00:33:45.940 Supported Log Pages Log Page: May Support 00:33:45.940 Commands Supported & Effects Log Page: Not Supported 00:33:45.940 Feature Identifiers & Effects Log Page:May Support 00:33:45.940 NVMe-MI Commands & Effects Log Page: May Support 00:33:45.940 Data Area 4 for Telemetry Log: Not Supported 00:33:45.940 Error Log Page Entries Supported: 1 00:33:45.940 Keep Alive: Not Supported 00:33:45.940 00:33:45.940 NVM Command Set Attributes 00:33:45.940 ========================== 00:33:45.940 Submission Queue Entry Size 00:33:45.940 Max: 1 00:33:45.940 Min: 1 00:33:45.940 Completion Queue Entry Size 00:33:45.940 Max: 1 00:33:45.940 Min: 1 00:33:45.940 Number of Namespaces: 0 00:33:45.940 Compare Command: Not Supported 00:33:45.940 Write Uncorrectable Command: Not Supported 00:33:45.940 Dataset Management Command: Not Supported 00:33:45.940 Write Zeroes Command: Not Supported 00:33:45.940 Set Features Save Field: Not Supported 00:33:45.940 Reservations: Not Supported 00:33:45.940 Timestamp: Not Supported 00:33:45.940 Copy: Not Supported 00:33:45.940 Volatile Write Cache: Not Present 00:33:45.940 Atomic Write Unit (Normal): 1 00:33:45.940 Atomic Write Unit (PFail): 1 00:33:45.940 Atomic Compare & Write Unit: 1 00:33:45.940 Fused Compare & Write: Not Supported 00:33:45.940 Scatter-Gather List 00:33:45.940 SGL Command Set: Supported 00:33:45.940 SGL Keyed: Not Supported 00:33:45.940 SGL Bit Bucket Descriptor: Not Supported 00:33:45.940 SGL Metadata Pointer: Not Supported 00:33:45.940 Oversized SGL: Not Supported 00:33:45.940 SGL Metadata Address: Not Supported 00:33:45.940 SGL Offset: Supported 00:33:45.940 Transport SGL Data Block: Not Supported 00:33:45.940 Replay Protected Memory Block: Not Supported 00:33:45.940 00:33:45.940 Firmware Slot Information 00:33:45.940 ========================= 00:33:45.940 Active slot: 0 00:33:45.940 00:33:45.940 00:33:45.940 Error Log 00:33:45.940 
========= 00:33:45.940 00:33:45.940 Active Namespaces 00:33:45.940 ================= 00:33:45.940 Discovery Log Page 00:33:45.940 ================== 00:33:45.940 Generation Counter: 2 00:33:45.940 Number of Records: 2 00:33:45.940 Record Format: 0 00:33:45.940 00:33:45.940 Discovery Log Entry 0 00:33:45.940 ---------------------- 00:33:45.940 Transport Type: 3 (TCP) 00:33:45.940 Address Family: 1 (IPv4) 00:33:45.940 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:45.940 Entry Flags: 00:33:45.940 Duplicate Returned Information: 0 00:33:45.940 Explicit Persistent Connection Support for Discovery: 0 00:33:45.940 Transport Requirements: 00:33:45.940 Secure Channel: Not Specified 00:33:45.940 Port ID: 1 (0x0001) 00:33:45.940 Controller ID: 65535 (0xffff) 00:33:45.940 Admin Max SQ Size: 32 00:33:45.940 Transport Service Identifier: 4420 00:33:45.940 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:45.940 Transport Address: 10.0.0.1 00:33:45.940 Discovery Log Entry 1 00:33:45.940 ---------------------- 00:33:45.940 Transport Type: 3 (TCP) 00:33:45.940 Address Family: 1 (IPv4) 00:33:45.940 Subsystem Type: 2 (NVM Subsystem) 00:33:45.940 Entry Flags: 00:33:45.940 Duplicate Returned Information: 0 00:33:45.940 Explicit Persistent Connection Support for Discovery: 0 00:33:45.940 Transport Requirements: 00:33:45.940 Secure Channel: Not Specified 00:33:45.940 Port ID: 1 (0x0001) 00:33:45.940 Controller ID: 65535 (0xffff) 00:33:45.940 Admin Max SQ Size: 32 00:33:45.940 Transport Service Identifier: 4420 00:33:45.940 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:45.940 Transport Address: 10.0.0.1 00:33:45.940 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:46.200 get_feature(0x01) failed 00:33:46.200 get_feature(0x02) failed 00:33:46.200 get_feature(0x04) failed 00:33:46.200 ===================================================== 00:33:46.200 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:46.200 ===================================================== 00:33:46.200 Controller Capabilities/Features 00:33:46.200 ================================ 00:33:46.200 Vendor ID: 0000 00:33:46.200 Subsystem Vendor ID: 0000 00:33:46.200 Serial Number: d733e9fbb6106f8eca13 00:33:46.200 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:46.200 Firmware Version: 6.8.9-20 00:33:46.200 Recommended Arb Burst: 6 00:33:46.200 IEEE OUI Identifier: 00 00 00 00:33:46.200 Multi-path I/O 00:33:46.200 May have multiple subsystem ports: Yes 00:33:46.200 May have multiple controllers: Yes 00:33:46.201 Associated with SR-IOV VF: No 00:33:46.201 Max Data Transfer Size: Unlimited 00:33:46.201 Max Number of Namespaces: 1024 00:33:46.201 Max Number of I/O Queues: 128 00:33:46.201 NVMe Specification Version (VS): 1.3 00:33:46.201 NVMe Specification Version (Identify): 1.3 00:33:46.201 Maximum Queue Entries: 1024 00:33:46.201 Contiguous Queues Required: No 00:33:46.201 Arbitration Mechanisms Supported 00:33:46.201 Weighted Round Robin: Not Supported 00:33:46.201 Vendor Specific: Not Supported 00:33:46.201 Reset Timeout: 7500 ms 00:33:46.201 Doorbell Stride: 4 bytes 00:33:46.201 NVM Subsystem Reset: Not Supported 00:33:46.201 Command Sets Supported 00:33:46.201 NVM Command Set: Supported 00:33:46.201 Boot Partition: Not Supported 00:33:46.201 
Memory Page Size Minimum: 4096 bytes 00:33:46.201 Memory Page Size Maximum: 4096 bytes 00:33:46.201 Persistent Memory Region: Not Supported 00:33:46.201 Optional Asynchronous Events Supported 00:33:46.201 Namespace Attribute Notices: Supported 00:33:46.201 Firmware Activation Notices: Not Supported 00:33:46.201 ANA Change Notices: Supported 00:33:46.201 PLE Aggregate Log Change Notices: Not Supported 00:33:46.201 LBA Status Info Alert Notices: Not Supported 00:33:46.201 EGE Aggregate Log Change Notices: Not Supported 00:33:46.201 Normal NVM Subsystem Shutdown event: Not Supported 00:33:46.201 Zone Descriptor Change Notices: Not Supported 00:33:46.201 Discovery Log Change Notices: Not Supported 00:33:46.201 Controller Attributes 00:33:46.201 128-bit Host Identifier: Supported 00:33:46.201 Non-Operational Permissive Mode: Not Supported 00:33:46.201 NVM Sets: Not Supported 00:33:46.201 Read Recovery Levels: Not Supported 00:33:46.201 Endurance Groups: Not Supported 00:33:46.201 Predictable Latency Mode: Not Supported 00:33:46.201 Traffic Based Keep ALive: Supported 00:33:46.201 Namespace Granularity: Not Supported 00:33:46.201 SQ Associations: Not Supported 00:33:46.201 UUID List: Not Supported 00:33:46.201 Multi-Domain Subsystem: Not Supported 00:33:46.201 Fixed Capacity Management: Not Supported 00:33:46.201 Variable Capacity Management: Not Supported 00:33:46.201 Delete Endurance Group: Not Supported 00:33:46.201 Delete NVM Set: Not Supported 00:33:46.201 Extended LBA Formats Supported: Not Supported 00:33:46.201 Flexible Data Placement Supported: Not Supported 00:33:46.201 00:33:46.201 Controller Memory Buffer Support 00:33:46.201 ================================ 00:33:46.201 Supported: No 00:33:46.201 00:33:46.201 Persistent Memory Region Support 00:33:46.201 ================================ 00:33:46.201 Supported: No 00:33:46.201 00:33:46.201 Admin Command Set Attributes 00:33:46.201 ============================ 00:33:46.201 Security Send/Receive: Not Supported 00:33:46.201 Format NVM: Not Supported 00:33:46.201 Firmware Activate/Download: Not Supported 00:33:46.201 Namespace Management: Not Supported 00:33:46.201 Device Self-Test: Not Supported 00:33:46.201 Directives: Not Supported 00:33:46.201 NVMe-MI: Not Supported 00:33:46.201 Virtualization Management: Not Supported 00:33:46.201 Doorbell Buffer Config: Not Supported 00:33:46.201 Get LBA Status Capability: Not Supported 00:33:46.201 Command & Feature Lockdown Capability: Not Supported 00:33:46.201 Abort Command Limit: 4 00:33:46.201 Async Event Request Limit: 4 00:33:46.201 Number of Firmware Slots: N/A 00:33:46.201 Firmware Slot 1 Read-Only: N/A 00:33:46.201 Firmware Activation Without Reset: N/A 00:33:46.201 Multiple Update Detection Support: N/A 00:33:46.201 Firmware Update Granularity: No Information Provided 00:33:46.201 Per-Namespace SMART Log: Yes 00:33:46.201 Asymmetric Namespace Access Log Page: Supported 00:33:46.201 ANA Transition Time : 10 sec 00:33:46.201 00:33:46.201 Asymmetric Namespace Access Capabilities 00:33:46.201 ANA Optimized State : Supported 00:33:46.201 ANA Non-Optimized State : Supported 00:33:46.201 ANA Inaccessible State : Supported 00:33:46.201 ANA Persistent Loss State : Supported 00:33:46.201 ANA Change State : Supported 00:33:46.201 ANAGRPID is not changed : No 00:33:46.201 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:46.201 00:33:46.201 ANA Group Identifier Maximum : 128 00:33:46.201 Number of ANA Group Identifiers : 128 00:33:46.201 Max Number of Allowed Namespaces : 1024 00:33:46.201 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:46.201 Command Effects Log Page: Supported 00:33:46.201 Get Log Page Extended Data: Supported 00:33:46.201 Telemetry Log Pages: Not Supported 00:33:46.201 Persistent Event Log Pages: Not Supported 00:33:46.201 Supported Log Pages Log Page: May Support 00:33:46.201 Commands Supported & Effects Log Page: Not Supported 00:33:46.201 Feature Identifiers & Effects Log Page:May Support 00:33:46.201 NVMe-MI Commands & Effects Log Page: May Support 00:33:46.201 Data Area 4 for Telemetry Log: Not Supported 00:33:46.201 Error Log Page Entries Supported: 128 00:33:46.201 Keep Alive: Supported 00:33:46.201 Keep Alive Granularity: 1000 ms 00:33:46.201 00:33:46.201 NVM Command Set Attributes 00:33:46.201 ========================== 00:33:46.201 Submission Queue Entry Size 00:33:46.201 Max: 64 00:33:46.201 Min: 64 00:33:46.201 Completion Queue Entry Size 00:33:46.201 Max: 16 00:33:46.201 Min: 16 00:33:46.201 Number of Namespaces: 1024 00:33:46.201 Compare Command: Not Supported 00:33:46.201 Write Uncorrectable Command: Not Supported 00:33:46.201 Dataset Management Command: Supported 00:33:46.201 Write Zeroes Command: Supported 00:33:46.201 Set Features Save Field: Not Supported 00:33:46.201 Reservations: Not Supported 00:33:46.201 Timestamp: Not Supported 00:33:46.201 Copy: Not Supported 00:33:46.201 Volatile Write Cache: Present 00:33:46.201 Atomic Write Unit (Normal): 1 00:33:46.201 Atomic Write Unit (PFail): 1 00:33:46.201 Atomic Compare & Write Unit: 1 00:33:46.201 Fused Compare & Write: Not Supported 00:33:46.201 Scatter-Gather List 00:33:46.201 SGL Command Set: Supported 00:33:46.201 SGL Keyed: Not Supported 00:33:46.201 SGL Bit Bucket Descriptor: Not Supported 00:33:46.201 SGL Metadata Pointer: Not Supported 00:33:46.201 Oversized SGL: Not Supported 00:33:46.201 SGL Metadata Address: Not Supported 00:33:46.201 SGL Offset: Supported 00:33:46.201 Transport SGL Data Block: Not Supported 00:33:46.201 Replay Protected Memory Block: Not Supported 00:33:46.201 00:33:46.201 Firmware Slot Information 00:33:46.201 ========================= 00:33:46.201 Active slot: 0 00:33:46.201 00:33:46.201 Asymmetric Namespace Access 00:33:46.201 =========================== 00:33:46.201 Change Count : 0 00:33:46.201 Number of ANA Group Descriptors : 1 00:33:46.201 ANA Group Descriptor : 0 00:33:46.201 ANA Group ID : 1 00:33:46.201 Number of NSID Values : 1 00:33:46.201 Change Count : 0 00:33:46.201 ANA State : 1 00:33:46.201 Namespace Identifier : 1 00:33:46.201 00:33:46.201 Commands Supported and Effects 00:33:46.201 ============================== 00:33:46.201 Admin Commands 00:33:46.201 -------------- 00:33:46.201 Get Log Page (02h): Supported 00:33:46.201 Identify (06h): Supported 00:33:46.201 Abort (08h): Supported 00:33:46.201 Set Features (09h): Supported 00:33:46.201 Get Features (0Ah): Supported 00:33:46.201 Asynchronous Event Request (0Ch): Supported 00:33:46.201 Keep Alive (18h): Supported 00:33:46.201 I/O Commands 00:33:46.201 ------------ 00:33:46.201 Flush (00h): Supported 00:33:46.201 Write (01h): Supported LBA-Change 00:33:46.201 Read (02h): Supported 00:33:46.201 Write Zeroes (08h): Supported LBA-Change 00:33:46.201 Dataset Management (09h): Supported 00:33:46.201 00:33:46.201 Error Log 00:33:46.201 ========= 00:33:46.201 Entry: 0 00:33:46.201 Error Count: 0x3 00:33:46.201 Submission Queue Id: 0x0 00:33:46.201 Command Id: 0x5 00:33:46.201 Phase Bit: 0 00:33:46.201 Status Code: 0x2 00:33:46.201 Status Code Type: 0x0 00:33:46.201 Do Not Retry: 1 00:33:46.201 
Error Location: 0x28 00:33:46.201 LBA: 0x0 00:33:46.201 Namespace: 0x0 00:33:46.201 Vendor Log Page: 0x0 00:33:46.201 ----------- 00:33:46.201 Entry: 1 00:33:46.201 Error Count: 0x2 00:33:46.201 Submission Queue Id: 0x0 00:33:46.201 Command Id: 0x5 00:33:46.201 Phase Bit: 0 00:33:46.201 Status Code: 0x2 00:33:46.201 Status Code Type: 0x0 00:33:46.201 Do Not Retry: 1 00:33:46.201 Error Location: 0x28 00:33:46.201 LBA: 0x0 00:33:46.201 Namespace: 0x0 00:33:46.201 Vendor Log Page: 0x0 00:33:46.201 ----------- 00:33:46.201 Entry: 2 00:33:46.202 Error Count: 0x1 00:33:46.202 Submission Queue Id: 0x0 00:33:46.202 Command Id: 0x4 00:33:46.202 Phase Bit: 0 00:33:46.202 Status Code: 0x2 00:33:46.202 Status Code Type: 0x0 00:33:46.202 Do Not Retry: 1 00:33:46.202 Error Location: 0x28 00:33:46.202 LBA: 0x0 00:33:46.202 Namespace: 0x0 00:33:46.202 Vendor Log Page: 0x0 00:33:46.202 00:33:46.202 Number of Queues 00:33:46.202 ================ 00:33:46.202 Number of I/O Submission Queues: 128 00:33:46.202 Number of I/O Completion Queues: 128 00:33:46.202 00:33:46.202 ZNS Specific Controller Data 00:33:46.202 ============================ 00:33:46.202 Zone Append Size Limit: 0 00:33:46.202 00:33:46.202 00:33:46.202 Active Namespaces 00:33:46.202 ================= 00:33:46.202 get_feature(0x05) failed 00:33:46.202 Namespace ID:1 00:33:46.202 Command Set Identifier: NVM (00h) 00:33:46.202 Deallocate: Supported 00:33:46.202 Deallocated/Unwritten Error: Not Supported 00:33:46.202 Deallocated Read Value: Unknown 00:33:46.202 Deallocate in Write Zeroes: Not Supported 00:33:46.202 Deallocated Guard Field: 0xFFFF 00:33:46.202 Flush: Supported 00:33:46.202 Reservation: Not Supported 00:33:46.202 Namespace Sharing Capabilities: Multiple Controllers 00:33:46.202 Size (in LBAs): 1953525168 (931GiB) 00:33:46.202 Capacity (in LBAs): 1953525168 (931GiB) 00:33:46.202 Utilization (in LBAs): 1953525168 (931GiB) 00:33:46.202 UUID: dff20a31-9c2d-47fa-9b30-d822d0eafac3 00:33:46.202 Thin Provisioning: Not Supported 00:33:46.202 Per-NS Atomic Units: Yes 00:33:46.202 Atomic Boundary Size (Normal): 0 00:33:46.202 Atomic Boundary Size (PFail): 0 00:33:46.202 Atomic Boundary Offset: 0 00:33:46.202 NGUID/EUI64 Never Reused: No 00:33:46.202 ANA group ID: 1 00:33:46.202 Namespace Write Protected: No 00:33:46.202 Number of LBA Formats: 1 00:33:46.202 Current LBA Format: LBA Format #00 00:33:46.202 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:46.202 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:46.202 rmmod nvme_tcp 00:33:46.202 rmmod nvme_fabrics 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:46.202 23:58:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.202 23:58:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.105 23:58:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:48.105 23:58:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:48.105 23:58:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:48.105 23:58:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:33:48.105 23:58:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:48.105 23:58:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:48.105 23:58:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:48.105 23:58:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:48.105 23:58:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:48.105 23:58:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:48.391 23:58:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:49.793 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:49.793 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:49.793 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:49.793 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:49.793 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:49.793 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:33:49.793 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:49.793 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:49.793 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:49.793 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:49.793 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:49.793 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:49.793 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:49.793 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:49.793 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:49.793 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:50.727 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:50.727 00:33:50.727 real 0m9.685s 00:33:50.727 user 0m2.100s 00:33:50.727 sys 0m3.604s 00:33:50.727 23:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:50.727 23:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.727 ************************************ 00:33:50.727 END TEST nvmf_identify_kernel_target 00:33:50.727 ************************************ 00:33:50.727 23:58:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:50.727 23:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:50.727 23:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:50.727 23:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.727 ************************************ 00:33:50.727 START TEST nvmf_auth_host 00:33:50.727 ************************************ 00:33:50.727 23:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:50.727 * Looking for test storage... 
00:33:50.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:50.727 23:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:50.727 23:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:33:50.727 23:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:50.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.986 --rc genhtml_branch_coverage=1 00:33:50.986 --rc genhtml_function_coverage=1 00:33:50.986 --rc genhtml_legend=1 00:33:50.986 --rc geninfo_all_blocks=1 00:33:50.986 --rc geninfo_unexecuted_blocks=1 00:33:50.986 00:33:50.986 ' 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:50.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.986 --rc genhtml_branch_coverage=1 00:33:50.986 --rc genhtml_function_coverage=1 00:33:50.986 --rc genhtml_legend=1 00:33:50.986 --rc geninfo_all_blocks=1 00:33:50.986 --rc geninfo_unexecuted_blocks=1 00:33:50.986 00:33:50.986 ' 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:50.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.986 --rc genhtml_branch_coverage=1 00:33:50.986 --rc genhtml_function_coverage=1 00:33:50.986 --rc genhtml_legend=1 00:33:50.986 --rc geninfo_all_blocks=1 00:33:50.986 --rc geninfo_unexecuted_blocks=1 00:33:50.986 00:33:50.986 ' 00:33:50.986 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:50.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.986 --rc genhtml_branch_coverage=1 00:33:50.986 --rc genhtml_function_coverage=1 00:33:50.986 --rc genhtml_legend=1 00:33:50.986 --rc geninfo_all_blocks=1 00:33:50.987 --rc geninfo_unexecuted_blocks=1 00:33:50.987 00:33:50.987 ' 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.987 23:58:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:50.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:50.987 23:58:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:52.889 23:58:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.889 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:52.890 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:52.890 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.890 
23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:52.890 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:52.890 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.890 23:58:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:52.890 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:53.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:53.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:33:53.149 00:33:53.149 --- 10.0.0.2 ping statistics --- 00:33:53.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.149 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:53.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:53.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:33:53.149 00:33:53.149 --- 10.0.0.1 ping statistics --- 00:33:53.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.149 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=322034 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 322034 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 322034 ']' 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
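From here host/auth.sh prepares DH-HMAC-CHAP material: the gen_dhchap_key calls in the following trace pull random bytes from /dev/urandom with xxd, wrap them into a DHHC-1 secret via a small Python helper in nvmf/common.sh, and store the result in a 0600-mode temp file whose path is recorded in keys[]/ckeys[]. A rough shell sketch of that flow, with the actual DHHC-1 encoding left to the helper and not reproduced here:

# Shape of gen_dhchap_key <digest> <len> as it appears in the trace (illustrative only).
len=32                                          # requested key length in hex characters
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # e.g. 9c76a5cfd57e811ed1c581456dc56347
file=$(mktemp -t spdk.key-null.XXX)             # /tmp/spdk.key-null.nMy in the trace
# format_dhchap_key "$key" 0 emits the "DHHC-1:..." form of $key, presumably redirected into $file by the script
chmod 0600 "$file"                              # the secret file is made owner-only
echo "$file"                                    # the test stores this path, e.g. keys[0]=/tmp/spdk.key-null.nMy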
00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:53.149 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9c76a5cfd57e811ed1c581456dc56347 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nMy 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9c76a5cfd57e811ed1c581456dc56347 0 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9c76a5cfd57e811ed1c581456dc56347 0 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9c76a5cfd57e811ed1c581456dc56347 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nMy 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nMy 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.nMy 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:53.408 23:58:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1bef3fcf192be24a0f8c5e3703db414865f3540812cf2c6adb105f812f4a3bd8 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1BM 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1bef3fcf192be24a0f8c5e3703db414865f3540812cf2c6adb105f812f4a3bd8 3 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1bef3fcf192be24a0f8c5e3703db414865f3540812cf2c6adb105f812f4a3bd8 3 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1bef3fcf192be24a0f8c5e3703db414865f3540812cf2c6adb105f812f4a3bd8 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:53.408 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1BM 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1BM 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.1BM 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dc3d1065a3e8fc8244953f8fd7387265f0d93ed4a411266f 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eJQ 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dc3d1065a3e8fc8244953f8fd7387265f0d93ed4a411266f 0 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dc3d1065a3e8fc8244953f8fd7387265f0d93ed4a411266f 0 
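gen_dhchap_key, invoked here and repeatedly below, draws len/2 random bytes as a hex string and wraps it into a DHHC-1 secret (digest code 0=null, 1=sha256, 2=sha384, 3=sha512, per the digests map in the trace). The body of the `python -` step is not visible in the trace, so the wrapping below is an assumption based on the standard NVMe DH-HMAC-CHAP secret representation (base64 of the secret followed by its CRC-32); read it as a sketch, not the exact nvmf/common.sh code.

  gen_dhchap_key() {
      local digest=$1 len=$2                      # e.g. "null 32" or "sha512 64"
      local -A codes=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      local hex file
      hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
      file=$(mktemp -t "spdk.key-${digest}.XXX")
      # assumed encoding: base64(secret || little-endian CRC-32 of the secret)
      python3 -c 'import base64,struct,sys,zlib; s=sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(s + struct.pack("<I", zlib.crc32(s))).decode()))' "$hex" "${codes[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

Called as in the trace, e.g. keys[0]=$(gen_dhchap_key null 32) and ckeys[0]=$(gen_dhchap_key sha512 64), building five key/controller-key pairs in total.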
00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dc3d1065a3e8fc8244953f8fd7387265f0d93ed4a411266f 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eJQ 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eJQ 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.eJQ 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c528c378624854538e067722089f38af69a40d0f4eef6d9e 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.mvI 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c528c378624854538e067722089f38af69a40d0f4eef6d9e 2 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c528c378624854538e067722089f38af69a40d0f4eef6d9e 2 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c528c378624854538e067722089f38af69a40d0f4eef6d9e 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.mvI 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.mvI 00:33:53.667 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.mvI 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.668 23:58:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=033fb15508cc385d8f9ccb39165852db 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.4if 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 033fb15508cc385d8f9ccb39165852db 1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 033fb15508cc385d8f9ccb39165852db 1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=033fb15508cc385d8f9ccb39165852db 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.4if 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.4if 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.4if 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f4cd06e819f0e451f0e02c9ae354c438 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.DHh 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f4cd06e819f0e451f0e02c9ae354c438 1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f4cd06e819f0e451f0e02c9ae354c438 1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=f4cd06e819f0e451f0e02c9ae354c438 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.DHh 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.DHh 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.DHh 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ed5fd3afa8c1319749cfe3aafa77b9ce6f9eea3baf394470 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LD1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ed5fd3afa8c1319749cfe3aafa77b9ce6f9eea3baf394470 2 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ed5fd3afa8c1319749cfe3aafa77b9ce6f9eea3baf394470 2 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ed5fd3afa8c1319749cfe3aafa77b9ce6f9eea3baf394470 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LD1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LD1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.LD1 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:53.668 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:53.926 23:58:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:53.926 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2fbbcc5c0d3899008e06437b175b4bb3 00:33:53.926 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:53.926 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KUF 00:33:53.926 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2fbbcc5c0d3899008e06437b175b4bb3 0 00:33:53.926 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2fbbcc5c0d3899008e06437b175b4bb3 0 00:33:53.926 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:53.926 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:53.926 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2fbbcc5c0d3899008e06437b175b4bb3 00:33:53.926 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:53.926 23:58:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KUF 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KUF 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.KUF 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=785cca9b73e2036ceaa85391d90fb9807afdd1dcf11237ef57eee74fc8f7b32f 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8Cv 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 785cca9b73e2036ceaa85391d90fb9807afdd1dcf11237ef57eee74fc8f7b32f 3 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 785cca9b73e2036ceaa85391d90fb9807afdd1dcf11237ef57eee74fc8f7b32f 3 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=785cca9b73e2036ceaa85391d90fb9807afdd1dcf11237ef57eee74fc8f7b32f 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8Cv 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8Cv 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.8Cv 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 322034 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 322034 ']' 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:53.926 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nMy 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.1BM ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1BM 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.eJQ 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.mvI ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.mvI 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.4if 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.DHh ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DHh 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.LD1 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.KUF ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.KUF 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.8Cv 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:54.185 23:58:28 
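Each temporary key file is then handed to the running target as a named keyring entry. rpc_cmd in the trace wraps scripts/rpc.py, so the equivalent direct invocations look like this (file names match the ones generated above; only the first two pairs are shown):

  # register host keys (keyN) and controller keys (ckeyN) with the target's keyring
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.nMy
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1BM
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.eJQ
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mvI
  # ...and likewise for key2/ckey2, key3/ckey3 and key4 (key4 has no controller key)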
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.185 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:54.186 23:58:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:55.559 Waiting for block devices as requested 00:33:55.559 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:55.559 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:55.817 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:55.817 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:55.817 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:56.075 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:56.075 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:56.075 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:56.075 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:56.333 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:56.333 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:56.333 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:56.333 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:56.590 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:56.590 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:56.590 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:56.590 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:57.157 No valid GPT data, bailing 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:57.157 23:58:31 
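configure_kernel_target turns the local NVMe drive (rebound from vfio-pci to the kernel nvme driver by setup.sh reset above) into a kernel nvmet subsystem that the SPDK host code can authenticate against. The mkdir calls above create the configfs objects; the bare echo lines that follow in the trace are redirected into their attributes. xtrace hides the redirection targets, so the attribute paths below follow the standard /sys/kernel/config/nvmet layout and are an assumption in that sense:

  nqn=nqn.2024-02.io.spdk:cnode0
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet
  mkdir $subsys
  mkdir $subsys/namespaces/1
  mkdir $port

  echo SPDK-$nqn    > $subsys/attr_model              # model string seen by the host (assumed target)
  echo 1            > $subsys/attr_allow_any_host     # assumed target of the first "echo 1"
  echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
  echo 1            > $subsys/namespaces/1/enable
  echo 10.0.0.1     > $port/addr_traddr
  echo tcp          > $port/addr_trtype
  echo 4420         > $port/addr_trsvcid
  echo ipv4         > $port/addr_adrfam
  ln -s $subsys $port/subsystems/                     # expose the subsystem on the port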
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:57.157 00:33:57.157 Discovery Log Number of Records 2, Generation counter 2 00:33:57.157 =====Discovery Log Entry 0====== 00:33:57.157 trtype: tcp 00:33:57.157 adrfam: ipv4 00:33:57.157 subtype: current discovery subsystem 00:33:57.157 treq: not specified, sq flow control disable supported 00:33:57.157 portid: 1 00:33:57.157 trsvcid: 4420 00:33:57.157 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:57.157 traddr: 10.0.0.1 00:33:57.157 eflags: none 00:33:57.157 sectype: none 00:33:57.157 =====Discovery Log Entry 1====== 00:33:57.157 trtype: tcp 00:33:57.157 adrfam: ipv4 00:33:57.157 subtype: nvme subsystem 00:33:57.157 treq: not specified, sq flow control disable supported 00:33:57.157 portid: 1 00:33:57.157 trsvcid: 4420 00:33:57.157 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:57.157 traddr: 10.0.0.1 00:33:57.157 eflags: none 00:33:57.157 sectype: none 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.157 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.158 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.158 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.158 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.158 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.158 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.158 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.158 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.158 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.416 nvme0n1 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.416 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
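The first authenticated connection above, and every iteration of the digest/dhgroup/keyid loop that fills the rest of this log, follows the same pattern: install a hash, DH group and key pair for the host NQN on the kernel target (nvmet_auth_set_key), restrict the host to the same algorithms via bdev_nvme_set_options, attach with --dhchap-key/--dhchap-ctrlr-key, confirm the controller came up, and detach. One iteration looks roughly as follows; key1/ckey1 are the keyring names registered earlier, and the nvmet attribute paths are assumed from the standard configfs layout since the trace hides the echo redirection targets.

  hostnqn=nqn.2024-02.io.spdk:host0
  hostdir=/sys/kernel/config/nvmet/hosts/$hostnqn

  # target side (nvmet_auth_set_key): hash, DH group and the key pair for this host
  echo 'hmac(sha256)'            > $hostdir/dhchap_hash
  echo ffdhe2048                 > $hostdir/dhchap_dhgroup
  cat /tmp/spdk.key-null.eJQ     > $hostdir/dhchap_key        # same DHHC-1 string the trace echoes
  cat /tmp/spdk.key-sha384.mvI   > $hostdir/dhchap_ctrl_key

  # host side (connect_authenticate): allow the same algorithms, then attach with the key pair
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The remainder of the section repeats exactly this sequence for keyid 0 through 4 and, further on, for the other digest and DH-group combinations.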
00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.417 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.675 nvme0n1 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.675 23:58:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.675 23:58:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.934 nvme0n1 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.934 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.193 nvme0n1 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.193 nvme0n1 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.193 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 
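The host/auth.sh@42-51 trace that keeps repeating above is the target-side key-provisioning helper, run once per digest/dhgroup/keyid combination. A minimal sketch of what those echo lines are presumably doing, assuming the kernel nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and a $hostnqn variable that are not shown in this excerpt:

    # Sketch only: push one DH-HMAC-CHAP key pair into the kernel nvmet target.
    # The configfs path and attribute names are assumptions, not taken from this log.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        local host=/sys/kernel/config/nvmet/hosts/$hostnqn

        echo "hmac($digest)" > "$host/dhchap_hash"       # traced as: echo 'hmac(sha256)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"         # e.g. ffdhe2048
        echo "$key" > "$host/dhchap_key"                 # DHHC-1:xx:<base64>: secret
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # only when a ctrlr key exists
    }
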
00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.452 nvme0n1 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.452 23:58:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.452 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:58.711 
23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.711 nvme0n1 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.711 23:58:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.711 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.711 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.711 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.711 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.970 23:58:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.970 nvme0n1 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.970 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.228 23:58:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.228 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.228 nvme0n1 00:33:59.229 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.229 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.229 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.229 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.229 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.229 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.229 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.229 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.229 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.229 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.487 23:58:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.487 nvme0n1 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.487 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
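Each nvmet_auth_set_key call is paired with connect_authenticate (host/auth.sh@55-65), whose RPCs all appear verbatim in the trace: restrict the initiator to one digest and DH group, attach with the matching key pair, confirm the controller came up as nvme0, then tear it down. A condensed sketch of that cycle, assuming rpc_cmd wraps the SPDK RPC client and that the nqn.2024-02.io.spdk host/subsystem NQNs are the ones used above:

    # Sketch: one authenticated connect/verify/disconnect cycle as driven in this log.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty for keyid 4

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
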
00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.746 23:58:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.746 nvme0n1 00:33:59.746 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.746 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.746 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.746 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.746 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.746 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.746 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.746 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.746 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.746 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.004 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.261 nvme0n1 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.261 23:58:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:00.261 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.262 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.520 nvme0n1 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
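The nvmf/common.sh@769-783 fragment that recurs between the RPCs is the helper that resolves which address to dial: it maps the transport to the name of an environment variable and then dereferences it, which in this tcp run always resolves to NVMF_INITIATOR_IP = 10.0.0.1. A sketch of that selection logic, with the candidate variable names taken from the trace and the TEST_TRANSPORT name and failure returns being assumptions:

    # Sketch: pick the target IP for the current transport, as traced above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )

        [[ -z $TEST_TRANSPORT ]] && return 1                  # traced as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # holds a variable *name*
        [[ -z ${!ip} ]] && return 1                           # indirect expansion of that name
        echo "${!ip}"                                         # -> 10.0.0.1 in this run
    }
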
00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.520 23:58:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.777 nvme0n1 00:34:00.777 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.777 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.777 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.777 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.777 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
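The digest stays fixed at sha256 throughout this stretch while keyid cycles 0-4 and the DH group advances (ffdhe2048, ffdhe3072, ffdhe4096, then ffdhe6144 at the end of the excerpt); that is the host/auth.sh@101-104 nested loop. Roughly, assuming keys[0..4] and ckeys[0..4] were populated with the DHHC-1 secrets earlier in the run:

    # Sketch of the sweep this section of the log is executing.
    digest=sha256
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)        # groups visible in this excerpt
    for dhgroup in "${dhgroups[@]}"; do                       # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                        # host/auth.sh@102, keyids 0..4
            nvmet_auth_set_key  "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # host/auth.sh@104
        done
    done
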
00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:01.035 23:58:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.035 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.293 nvme0n1 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.293 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.550 nvme0n1 00:34:01.550 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.550 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.550 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.551 23:58:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.115 nvme0n1 00:34:02.115 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.115 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.115 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.115 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.115 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.115 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.115 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.115 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.115 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.115 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.373 23:58:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.939 nvme0n1 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.939 23:58:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.939 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.506 nvme0n1 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:03.506 
23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.506 23:58:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.074 nvme0n1 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.074 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.641 nvme0n1 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:04.641 23:58:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.641 23:58:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.575 nvme0n1 00:34:05.575 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.833 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.834 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.834 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.834 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.834 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.834 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.834 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.834 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.834 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.834 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.834 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:05.834 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.834 23:58:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.767 nvme0n1 00:34:06.767 23:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.767 23:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.767 23:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.767 23:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.767 23:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.767 23:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.767 23:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.767 23:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.767 23:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.767 23:58:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.768 23:58:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.768 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.702 nvme0n1 00:34:07.702 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.702 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.702 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.702 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.702 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.702 23:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.960 23:58:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.960 23:58:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.895 nvme0n1 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.895 23:58:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.895 23:58:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.831 nvme0n1 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:09.831 
23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.831 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.090 nvme0n1 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.090 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.091 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.350 nvme0n1 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.350 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.609 nvme0n1 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:10.609 23:58:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.609 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.868 nvme0n1 00:34:10.868 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.868 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.868 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.868 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.868 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.868 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.868 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.868 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.868 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.868 23:58:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
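The trace cycles through every digest/dhgroup/keyid combination with the same sequence of steps: program the DH-HMAC-CHAP secret on the kernel nvmet target for the host NQN, restrict the SPDK initiator to the matching digest and DH group, attach an authenticated controller, confirm it shows up, and detach it again. Below is a minimal sketch of one such iteration (sha384/ffdhe2048, keyid 4), not the test script itself: rpc_cmd is the autotest wrapper around scripts/rpc.py, "key4" is assumed to be a key name registered with the target earlier in the test, and the nvmet configfs paths are assumptions, since the trace records only the values being echoed, not their destinations.

# Sketch of one authentication iteration from the loop above.
# Assumptions: rpc_cmd wraps SPDK's scripts/rpc.py; the /sys/kernel/config/nvmet
# attribute paths are illustrative and not taken from the trace.
HOSTNQN=nqn.2024-02.io.spdk:host0
SUBNQN=nqn.2024-02.io.spdk:cnode0
KEY4='DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=:'

# Target side: set the hash, DH group and host secret for this host NQN
# (keyid 4 has no controller secret, so no dhchap_ctrl_key is written).
echo 'hmac(sha384)' > "/sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_hash"
echo 'ffdhe2048'    > "/sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_dhgroup"
echo "$KEY4"        > "/sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_key"

# Host side: allow only the matching digest/DH group, connect with in-band
# authentication, check the controller name, then detach for the next round.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key4
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
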
00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.868 nvme0n1 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.868 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.127 23:58:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.127 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.411 nvme0n1 00:34:11.411 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.411 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.411 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.411 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.411 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.411 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.411 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.411 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.411 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.411 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.411 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.412 23:58:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.412 23:58:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.412 nvme0n1 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.412 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.673 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.673 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.673 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.673 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.673 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.673 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.673 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.673 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.673 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.673 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:11.673 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.674 nvme0n1 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.674 23:58:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.932 nvme0n1 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.932 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:12.191 
23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.191 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.192 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.192 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.192 nvme0n1 00:34:12.192 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.192 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.192 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.192 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.192 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.450 
23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.450 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.709 nvme0n1 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:12.709 23:58:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.709 23:58:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.968 nvme0n1 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.968 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.969 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.969 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.969 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.969 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.969 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.228 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.228 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.228 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.228 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.228 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.228 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:13.228 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.228 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.487 nvme0n1 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.487 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.746 nvme0n1 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.746 23:58:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.746 23:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.746 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.004 nvme0n1 00:34:14.004 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.004 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.004 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.004 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.004 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.004 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:14.262 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.263 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.830 nvme0n1 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.830 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.831 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.831 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.831 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.831 23:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.394 nvme0n1 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.394 23:58:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.394 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.395 23:58:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.961 nvme0n1 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:15.961 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.961 
23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 nvme0n1 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.528 23:58:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 nvme0n1 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.095 23:58:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:17.095 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.096 23:58:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.470 nvme0n1 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.470 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.471 23:58:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.404 nvme0n1 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.404 
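The get_main_ns_ip helper the trace keeps stepping through (nvmf/common.sh@769-783) reduces to a transport-keyed table lookup followed by an indirect expansion of the chosen variable name. A minimal sketch, assuming TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 are exported by the test environment as in this run; the real helper may differ in detail:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs resolve against the first target IP
            [tcp]=NVMF_INITIATOR_IP       # TCP runs resolve against the initiator IP
        )
        # Give up if the transport is unset or has no candidate variable mapped.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # Indirect expansion turns the variable *name* into its value (10.0.0.1 here).
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }

The resolved 10.0.0.1 is what the bdev_nvme_attach_controller calls in this trace pass as -a.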
23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.404 23:58:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.338 nvme0n1 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:20.338 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.339 23:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.273 nvme0n1 00:34:21.273 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.273 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.273 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.273 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.273 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.273 23:58:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.273 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.273 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.273 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.273 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.531 23:58:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.531 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.532 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.532 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.532 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.532 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.532 23:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.465 nvme0n1 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.465 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:22.724 nvme0n1 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.724 23:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.983 nvme0n1 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:22.983 
23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.983 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.984 nvme0n1 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.984 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.242 
23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.242 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.243 nvme0n1 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.243 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.501 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.501 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.501 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.501 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:23.501 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.501 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.501 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:23.501 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.501 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.501 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:23.501 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.501 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.502 nvme0n1 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.502 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.760 23:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.760 nvme0n1 00:34:23.760 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.760 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.760 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.760 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.760 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.760 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.760 
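Stripped of the xtrace noise, what the log is sweeping through here (host/auth.sh@100-104, with connect_authenticate at @55-65) is three nested loops over digest, DH group and key id. A condensed sketch of that flow using the helpers seen in the trace; the keyring names key0..key4 / ckey0..ckey3 refer to keys registered earlier in the run, and the digests/dhgroups/keys arrays are assumed to be populated by the surrounding script:

    for digest in "${digests[@]}"; do          # sha384 and sha512 are visible in this part of the log
        for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do     # 0..4; keyid 4 has no controller key
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

                # connect_authenticate: restrict the initiator to one digest/dhgroup pair ...
                rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # ... attach with the matching key pair (ctrlr key only when one exists) ...
                rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                    -a "$(get_main_ns_ip)" -s 4420 \
                    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                    --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
                # ... verify the controller authenticated and came up, then tear it down.
                [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
                rpc_cmd bdev_nvme_detach_controller nvme0
            done
        done
    done

Each combination attaches and detaches the same nvme0 controller, so the nvme0 comparison after bdev_nvme_get_controllers is the pass criterion for that digest/dhgroup/key triple.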
23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.760 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.760 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.760 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.760 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.019 23:58:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.019 nvme0n1 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.019 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:24.277 23:58:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.277 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.278 nvme0n1 00:34:24.278 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.278 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.278 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.278 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.278 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.278 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.536 23:58:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.536 nvme0n1 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.536 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:24.795 
23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.795 23:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
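For orientation: every pass in the trace above repeats one RPC cycle per (digest, dhgroup, keyid) combination — install the DHHC-1 secret on the kernel target with the test's nvmet_auth_set_key helper, restrict the host to that digest and DH group with bdev_nvme_set_options, attach a controller that presents the host key (adding --dhchap-ctrlr-key only when a controller key is defined for that id), confirm nvme0 shows up in bdev_nvme_get_controllers, and detach it again. A minimal sketch of that cycle for the ffdhe3072 pass, assuming rpc_cmd wraps SPDK's scripts/rpc.py as it does in this test framework and that keys[]/ckeys[] were populated earlier by the script:

# Sketch of the per-key cycle visible in the trace; not the verbatim host/auth.sh source.
for keyid in "${!keys[@]}"; do
    # Target side: provision hmac(sha512) + ffdhe3072 credentials for this key id.
    nvmet_auth_set_key sha512 ffdhe3072 "$keyid"

    # Host side: only advertise the digest/dhgroup under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Pass the controller key only when ckeys[keyid] is non-empty (same idiom as auth.sh@58).
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # The attach only succeeds if DH-HMAC-CHAP completed; verify, then tear down for the next id.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
done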
00:34:24.795 nvme0n1 00:34:24.795 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.795 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.795 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.795 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.795 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.795 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:25.053 23:58:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.053 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.311 nvme0n1 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.311 23:58:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.311 23:58:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.311 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.569 nvme0n1 00:34:25.569 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.569 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.569 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.569 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.569 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.569 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.569 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.569 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.569 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.569 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.569 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.828 23:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.086 nvme0n1 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.086 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.345 nvme0n1 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.345 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.911 nvme0n1 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:26.911 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.912 23:59:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.912 23:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.912 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.912 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.478 nvme0n1 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.478 23:59:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.478 23:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.044 nvme0n1 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:28.044 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.045 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.611 nvme0n1 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.611 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.612 23:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.178 nvme0n1 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.178 23:59:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.178 23:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.744 nvme0n1 00:34:29.744 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.744 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.744 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.744 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.744 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.744 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM3NmE1Y2ZkNTdlODExZWQxYzU4MTQ1NmRjNTYzNDeuxOYG: 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: ]] 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWJlZjNmY2YxOTJiZTI0YTBmOGM1ZTM3MDNkYjQxNDg2NWYzNTQwODEyY2YyYzZhZGIxMDVmODEyZjRhM2JkOKPC2CA=: 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.002 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.003 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.003 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.003 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.003 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.003 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.003 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.003 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.003 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.003 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.003 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:30.003 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.003 23:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.937 nvme0n1 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.937 23:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.872 nvme0n1 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.872 23:59:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.872 23:59:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.872 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.873 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:31.873 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.873 23:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.806 nvme0n1 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:32.806 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWQ1ZmQzYWZhOGMxMzE5NzQ5Y2ZlM2FhZmE3N2I5Y2U2ZjllZWEzYmFmMzk0NDcwvIMvlA==: 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: ]] 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZiYmNjNWMwZDM4OTkwMDhlMDY0MzdiMTc1YjRiYjNIPFDH: 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.807 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:33.065 23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.065 
23:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.999 nvme0n1 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzg1Y2NhOWI3M2UyMDM2Y2VhYTg1MzkxZDkwZmI5ODA3YWZkZDFkY2YxMTIzN2VmNTdlZWU3NGZjOGY3YjMyZn9799Y=: 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.999 23:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.934 nvme0n1 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.934 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.192 request: 00:34:35.192 { 00:34:35.192 "name": "nvme0", 00:34:35.192 "trtype": "tcp", 00:34:35.192 "traddr": "10.0.0.1", 00:34:35.192 "adrfam": "ipv4", 00:34:35.192 "trsvcid": "4420", 00:34:35.192 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:35.192 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:35.192 "prchk_reftag": false, 00:34:35.192 "prchk_guard": false, 00:34:35.192 "hdgst": false, 00:34:35.192 "ddgst": false, 00:34:35.192 "allow_unrecognized_csi": false, 00:34:35.192 "method": "bdev_nvme_attach_controller", 00:34:35.192 "req_id": 1 00:34:35.192 } 00:34:35.192 Got JSON-RPC error response 00:34:35.192 response: 00:34:35.192 { 00:34:35.192 "code": -5, 00:34:35.192 "message": "Input/output error" 00:34:35.192 } 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
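The rejected request above is the expected negative path for this pass: the target subsystem requires DH-CHAP, so bdev_nvme_attach_controller without any --dhchap-key fails and the RPC returns -5 (Input/output error), after which the harness confirms that no controller was left behind. A minimal standalone sketch of the same check, assuming SPDK's scripts/rpc.py on PATH and its default RPC socket (assumptions here; the harness itself goes through its rpc_cmd wrapper):

  # Attach without a DH-CHAP key against the listener used in this run; this is
  # expected to fail because the target subsystem mandates authentication.
  if rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "unexpected: attach without a DH-CHAP key succeeded" >&2
      exit 1
  fi
  # The rejected attach must not leave a controller behind.
  [[ "$(rpc.py bdev_nvme_get_controllers | jq length)" -eq 0 ]]

The entries that follow resolve the initiator IP again before the next attempt, which offers --dhchap-key key2 rather than the key the target was configured with for this pass and is likewise expected to be rejected.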
00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.192 request: 00:34:35.192 { 00:34:35.192 "name": "nvme0", 00:34:35.192 "trtype": "tcp", 00:34:35.192 "traddr": "10.0.0.1", 00:34:35.192 "adrfam": "ipv4", 00:34:35.192 "trsvcid": "4420", 00:34:35.192 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:35.192 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:35.192 "prchk_reftag": false, 00:34:35.192 "prchk_guard": false, 00:34:35.192 "hdgst": false, 00:34:35.192 "ddgst": false, 00:34:35.192 "dhchap_key": "key2", 00:34:35.192 "allow_unrecognized_csi": false, 00:34:35.192 "method": "bdev_nvme_attach_controller", 00:34:35.192 "req_id": 1 00:34:35.192 } 00:34:35.192 Got JSON-RPC error response 00:34:35.192 response: 00:34:35.192 { 00:34:35.192 "code": -5, 00:34:35.192 "message": "Input/output error" 00:34:35.192 } 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
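With the key2 attempt rejected and the controller count back at zero, the remaining cases in this stretch pair --dhchap-key key1 with the mismatched controller key ckey2 (again expected to fail with -5), reconnect with the matching key1/ckey1 pair, and then exercise bdev_nvme_set_keys, which re-keys the already attached controller in place instead of creating a new one. A minimal sketch of that re-key step under the same assumptions as above (rpc.py on its default socket, key names as registered earlier in this run, target side already switched to the matching key by the nvmet_auth_set_key sha256 ffdhe2048 2 call that follows):

  # Swap the DH-CHAP keys of an existing controller in place.
  rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # In this run, re-keying with key1 and ckey2 is refused with -13
  # (Permission denied) rather than -5, so the failure is asserted directly.
  if rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
      echo "unexpected: mismatched re-key accepted" >&2
      exit 1
  fi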
00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.192 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.193 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.450 request: 00:34:35.450 { 00:34:35.450 "name": "nvme0", 00:34:35.450 "trtype": "tcp", 00:34:35.450 "traddr": "10.0.0.1", 00:34:35.450 "adrfam": "ipv4", 00:34:35.450 "trsvcid": "4420", 00:34:35.450 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:35.450 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:35.450 "prchk_reftag": false, 00:34:35.450 "prchk_guard": false, 00:34:35.450 "hdgst": false, 00:34:35.450 "ddgst": false, 00:34:35.450 "dhchap_key": "key1", 00:34:35.450 "dhchap_ctrlr_key": "ckey2", 00:34:35.450 "allow_unrecognized_csi": false, 00:34:35.450 "method": "bdev_nvme_attach_controller", 00:34:35.450 "req_id": 1 00:34:35.450 } 00:34:35.450 Got JSON-RPC error response 00:34:35.450 response: 00:34:35.450 { 00:34:35.450 "code": -5, 00:34:35.450 "message": "Input/output 
error" 00:34:35.450 } 00:34:35.450 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:35.450 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:35.450 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:35.450 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:35.450 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:35.450 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:35.450 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.450 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.450 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.450 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.450 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.451 nvme0n1 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.451 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.709 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.709 request: 00:34:35.709 { 00:34:35.709 "name": "nvme0", 00:34:35.709 "dhchap_key": "key1", 00:34:35.709 "dhchap_ctrlr_key": "ckey2", 00:34:35.709 "method": "bdev_nvme_set_keys", 00:34:35.709 "req_id": 1 00:34:35.709 } 00:34:35.709 Got JSON-RPC error response 00:34:35.709 response: 00:34:35.709 { 00:34:35.709 "code": -13, 00:34:35.709 "message": "Permission denied" 00:34:35.709 } 00:34:35.710 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:35.710 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:35.710 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:35.710 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:35.710 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:34:35.710 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.710 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:35.710 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.710 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.710 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.710 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:35.710 23:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:36.648 23:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.648 23:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:36.648 23:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.648 23:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.951 23:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.951 23:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:36.951 23:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:37.910 23:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.910 23:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:37.910 23:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.910 23:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGMzZDEwNjVhM2U4ZmM4MjQ0OTUzZjhmZDczODcyNjVmMGQ5M2VkNGE0MTEyNjZmjWQ9Pw==: 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: ]] 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YzUyOGMzNzg2MjQ4NTQ1MzhlMDY3NzIyMDg5ZjM4YWY2OWE0MGQwZjRlZWY2ZDllpXj4Gw==: 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.910 nvme0n1 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDMzZmIxNTUwOGNjMzg1ZDhmOWNjYjM5MTY1ODUyZGJkSX+y: 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: ]] 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjRjZDA2ZTgxOWYwZTQ1MWYwZTAyYzlhZTM1NGM0Mzh10O6j: 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.910 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.168 request: 00:34:38.168 { 00:34:38.168 "name": "nvme0", 00:34:38.168 "dhchap_key": "key2", 00:34:38.168 "dhchap_ctrlr_key": "ckey1", 00:34:38.168 "method": "bdev_nvme_set_keys", 00:34:38.168 "req_id": 1 00:34:38.168 } 00:34:38.168 Got JSON-RPC error response 00:34:38.168 response: 00:34:38.168 { 00:34:38.168 "code": -13, 00:34:38.168 "message": "Permission denied" 00:34:38.168 } 00:34:38.168 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:38.168 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:38.168 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:38.168 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:38.168 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:38.168 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.168 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:38.168 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.168 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.168 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.168 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:38.168 23:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:39.101 23:59:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:39.101 rmmod nvme_tcp 00:34:39.101 rmmod nvme_fabrics 00:34:39.101 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 322034 ']' 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 322034 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 322034 ']' 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 322034 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 322034 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 322034' 00:34:39.360 killing process with pid 322034 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 322034 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 322034 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:34:39.360 23:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:41.895 23:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:42.829 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:42.829 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:42.829 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:42.829 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:42.829 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:42.829 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:42.829 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:42.829 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:42.829 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:42.829 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:42.829 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:42.829 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:43.087 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:43.087 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:43.087 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:43.087 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:44.023 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:44.023 23:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.nMy /tmp/spdk.key-null.eJQ /tmp/spdk.key-sha256.4if /tmp/spdk.key-sha384.LD1 /tmp/spdk.key-sha512.8Cv /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:44.023 23:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:45.398 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:45.398 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:45.398 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:34:45.398 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:45.398 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:45.398 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:45.398 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:45.398 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:45.398 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:45.398 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:45.398 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:45.398 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:45.398 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:45.398 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:45.398 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:45.398 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:45.398 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:45.398 00:34:45.398 real 0m54.547s 00:34:45.398 user 0m52.442s 00:34:45.398 sys 0m6.171s 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.398 ************************************ 00:34:45.398 END TEST nvmf_auth_host 00:34:45.398 ************************************ 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.398 ************************************ 00:34:45.398 START TEST nvmf_digest 00:34:45.398 ************************************ 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:45.398 * Looking for test storage... 
00:34:45.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:45.398 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:45.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.399 --rc genhtml_branch_coverage=1 00:34:45.399 --rc genhtml_function_coverage=1 00:34:45.399 --rc genhtml_legend=1 00:34:45.399 --rc geninfo_all_blocks=1 00:34:45.399 --rc geninfo_unexecuted_blocks=1 00:34:45.399 00:34:45.399 ' 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:45.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.399 --rc genhtml_branch_coverage=1 00:34:45.399 --rc genhtml_function_coverage=1 00:34:45.399 --rc genhtml_legend=1 00:34:45.399 --rc geninfo_all_blocks=1 00:34:45.399 --rc geninfo_unexecuted_blocks=1 00:34:45.399 00:34:45.399 ' 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:45.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.399 --rc genhtml_branch_coverage=1 00:34:45.399 --rc genhtml_function_coverage=1 00:34:45.399 --rc genhtml_legend=1 00:34:45.399 --rc geninfo_all_blocks=1 00:34:45.399 --rc geninfo_unexecuted_blocks=1 00:34:45.399 00:34:45.399 ' 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:45.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.399 --rc genhtml_branch_coverage=1 00:34:45.399 --rc genhtml_function_coverage=1 00:34:45.399 --rc genhtml_legend=1 00:34:45.399 --rc geninfo_all_blocks=1 00:34:45.399 --rc geninfo_unexecuted_blocks=1 00:34:45.399 00:34:45.399 ' 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.399 
23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:45.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:45.399 23:59:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:45.399 23:59:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:47.933 
23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:47.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:47.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:47.933 Found net devices under 0000:0a:00.0: cvl_0_0 
00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:47.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:47.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:47.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:34:47.933 00:34:47.933 --- 10.0.0.2 ping statistics --- 00:34:47.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.933 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:47.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:47.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:34:47.933 00:34:47.933 --- 10.0.0.1 ping statistics --- 00:34:47.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.933 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:47.933 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:47.934 ************************************ 00:34:47.934 START TEST nvmf_digest_clean 00:34:47.934 ************************************ 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=332026 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 332026 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 332026 ']' 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:47.934 23:59:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:47.934 [2024-11-19 23:59:21.936648] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:34:47.934 [2024-11-19 23:59:21.936724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.934 [2024-11-19 23:59:22.008171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.934 [2024-11-19 23:59:22.053003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.934 [2024-11-19 23:59:22.053053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:47.934 [2024-11-19 23:59:22.053097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:47.934 [2024-11-19 23:59:22.053111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:47.934 [2024-11-19 23:59:22.053121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:47.934 [2024-11-19 23:59:22.053688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.934 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.934 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:47.934 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:47.934 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:47.934 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:47.934 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.934 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:47.934 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:47.934 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:47.934 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.934 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:48.193 null0 00:34:48.193 [2024-11-19 23:59:22.291075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.193 [2024-11-19 23:59:22.315307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=332050 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 332050 /var/tmp/bperf.sock 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 332050 ']' 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:48.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:48.193 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:48.193 [2024-11-19 23:59:22.364698] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:34:48.193 [2024-11-19 23:59:22.364767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332050 ] 00:34:48.193 [2024-11-19 23:59:22.434257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:48.193 [2024-11-19 23:59:22.482278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.451 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:48.451 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:48.451 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:48.451 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:48.451 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:48.710 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:48.710 23:59:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:49.276 nvme0n1 00:34:49.276 23:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:49.276 23:59:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:49.276 Running I/O for 2 seconds... 
00:34:51.588 17611.00 IOPS, 68.79 MiB/s [2024-11-19T22:59:25.900Z] 17755.50 IOPS, 69.36 MiB/s 00:34:51.588 Latency(us) 00:34:51.588 [2024-11-19T22:59:25.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:51.588 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:51.588 nvme0n1 : 2.01 17774.86 69.43 0.00 0.00 7191.74 3519.53 16505.36 00:34:51.588 [2024-11-19T22:59:25.900Z] =================================================================================================================== 00:34:51.588 [2024-11-19T22:59:25.900Z] Total : 17774.86 69.43 0.00 0.00 7191.74 3519.53 16505.36 00:34:51.588 { 00:34:51.588 "results": [ 00:34:51.588 { 00:34:51.588 "job": "nvme0n1", 00:34:51.588 "core_mask": "0x2", 00:34:51.588 "workload": "randread", 00:34:51.588 "status": "finished", 00:34:51.588 "queue_depth": 128, 00:34:51.588 "io_size": 4096, 00:34:51.588 "runtime": 2.006036, 00:34:51.588 "iops": 17774.85548614282, 00:34:51.588 "mibps": 69.43302924274539, 00:34:51.588 "io_failed": 0, 00:34:51.588 "io_timeout": 0, 00:34:51.588 "avg_latency_us": 7191.73596898017, 00:34:51.588 "min_latency_us": 3519.525925925926, 00:34:51.588 "max_latency_us": 16505.36296296296 00:34:51.588 } 00:34:51.588 ], 00:34:51.588 "core_count": 1 00:34:51.588 } 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:51.588 | select(.opcode=="crc32c") 00:34:51.588 | "\(.module_name) \(.executed)"' 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 332050 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 332050 ']' 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 332050 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 332050 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 332050' 00:34:51.588 killing process with pid 332050 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 332050 00:34:51.588 Received shutdown signal, test time was about 2.000000 seconds 00:34:51.588 00:34:51.588 Latency(us) 00:34:51.588 [2024-11-19T22:59:25.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:51.588 [2024-11-19T22:59:25.900Z] =================================================================================================================== 00:34:51.588 [2024-11-19T22:59:25.900Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:51.588 23:59:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 332050 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=332460 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 332460 /var/tmp/bperf.sock 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 332460 ']' 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:51.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:51.847 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:51.847 [2024-11-19 23:59:26.049038] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:34:51.847 [2024-11-19 23:59:26.049137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332460 ] 00:34:51.847 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:51.847 Zero copy mechanism will not be used. 00:34:51.847 [2024-11-19 23:59:26.124163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.105 [2024-11-19 23:59:26.178801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.105 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:52.106 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:52.106 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:52.106 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:52.106 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:52.364 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.364 23:59:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.931 nvme0n1 00:34:52.931 23:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:52.931 23:59:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:52.931 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:52.931 Zero copy mechanism will not be used. 00:34:52.931 Running I/O for 2 seconds... 
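After each run the script confirms that the data-digest crc32c was actually computed in software by reading the accel statistics back over the same socket; a minimal sketch of that check, using the shortened rpc.py path and the same jq filter that appears in the xtrace, is:

  stats=$(rpc.py -s /var/tmp/bperf.sock accel_get_stats |
          jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  read -r acc_module acc_executed <<< "$stats"
  # scan_dsa=false for these runs, so the expected module is "software" and it must have executed at least once
  (( acc_executed > 0 )) && [[ $acc_module == software ]]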
00:34:55.243 5560.00 IOPS, 695.00 MiB/s [2024-11-19T22:59:29.555Z] 5524.50 IOPS, 690.56 MiB/s 00:34:55.243 Latency(us) 00:34:55.243 [2024-11-19T22:59:29.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.243 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:55.243 nvme0n1 : 2.00 5523.29 690.41 0.00 0.00 2892.78 649.29 9029.40 00:34:55.243 [2024-11-19T22:59:29.555Z] =================================================================================================================== 00:34:55.243 [2024-11-19T22:59:29.555Z] Total : 5523.29 690.41 0.00 0.00 2892.78 649.29 9029.40 00:34:55.243 { 00:34:55.243 "results": [ 00:34:55.243 { 00:34:55.243 "job": "nvme0n1", 00:34:55.243 "core_mask": "0x2", 00:34:55.243 "workload": "randread", 00:34:55.243 "status": "finished", 00:34:55.243 "queue_depth": 16, 00:34:55.243 "io_size": 131072, 00:34:55.243 "runtime": 2.003335, 00:34:55.243 "iops": 5523.289914068291, 00:34:55.243 "mibps": 690.4112392585364, 00:34:55.243 "io_failed": 0, 00:34:55.243 "io_timeout": 0, 00:34:55.243 "avg_latency_us": 2892.782330940068, 00:34:55.243 "min_latency_us": 649.2918518518519, 00:34:55.243 "max_latency_us": 9029.404444444444 00:34:55.243 } 00:34:55.243 ], 00:34:55.243 "core_count": 1 00:34:55.243 } 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:55.243 | select(.opcode=="crc32c") 00:34:55.243 | "\(.module_name) \(.executed)"' 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 332460 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 332460 ']' 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 332460 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:55.243 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 332460 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 332460' 00:34:55.501 killing process with pid 332460 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 332460 00:34:55.501 Received shutdown signal, test time was about 2.000000 seconds 00:34:55.501 00:34:55.501 Latency(us) 00:34:55.501 [2024-11-19T22:59:29.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.501 [2024-11-19T22:59:29.813Z] =================================================================================================================== 00:34:55.501 [2024-11-19T22:59:29.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 332460 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=332975 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 332975 /var/tmp/bperf.sock 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 332975 ']' 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:55.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:55.501 23:59:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:55.759 [2024-11-19 23:59:29.819622] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:34:55.759 [2024-11-19 23:59:29.819695] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332975 ] 00:34:55.759 [2024-11-19 23:59:29.893267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.759 [2024-11-19 23:59:29.941802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.759 23:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:55.760 23:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:55.760 23:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:55.760 23:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:55.760 23:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:56.325 23:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.325 23:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.892 nvme0n1 00:34:56.892 23:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:56.892 23:59:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:56.892 Running I/O for 2 seconds... 
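The repeated "killing process with pid ..." blocks come from the killprocess helper in autotest_common.sh; the xtrace records roughly the following logic (a sketch of what the trace shows, not the verbatim function):

  # killprocess <pid>: make sure the pid exists and is not sudo itself, then kill it and reap it
  kill -0 "$pid"                                   # fails if the process is already gone
  process_name=$(ps --no-headers -o comm= "$pid")  # reactor_1 for the bdevperf instances in this log
  if [ "$process_name" != sudo ]; then
      echo "killing process with pid $pid"
      kill "$pid"
  fi
  wait "$pid"   # bdevperf prints the all-zero shutdown latency table while it drains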
00:34:58.763 19918.00 IOPS, 77.80 MiB/s [2024-11-19T22:59:33.075Z] 19223.00 IOPS, 75.09 MiB/s 00:34:58.763 Latency(us) 00:34:58.763 [2024-11-19T22:59:33.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.763 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:58.763 nvme0n1 : 2.01 19220.58 75.08 0.00 0.00 6644.22 2985.53 9806.13 00:34:58.763 [2024-11-19T22:59:33.075Z] =================================================================================================================== 00:34:58.763 [2024-11-19T22:59:33.075Z] Total : 19220.58 75.08 0.00 0.00 6644.22 2985.53 9806.13 00:34:58.763 { 00:34:58.763 "results": [ 00:34:58.763 { 00:34:58.763 "job": "nvme0n1", 00:34:58.763 "core_mask": "0x2", 00:34:58.763 "workload": "randwrite", 00:34:58.763 "status": "finished", 00:34:58.763 "queue_depth": 128, 00:34:58.763 "io_size": 4096, 00:34:58.763 "runtime": 2.008576, 00:34:58.763 "iops": 19220.582143767526, 00:34:58.763 "mibps": 75.0803989990919, 00:34:58.763 "io_failed": 0, 00:34:58.763 "io_timeout": 0, 00:34:58.763 "avg_latency_us": 6644.222509723109, 00:34:58.763 "min_latency_us": 2985.528888888889, 00:34:58.763 "max_latency_us": 9806.127407407408 00:34:58.763 } 00:34:58.763 ], 00:34:58.763 "core_count": 1 00:34:58.763 } 00:34:58.763 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:58.763 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:58.763 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:58.763 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:58.764 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:58.764 | select(.opcode=="crc32c") 00:34:58.764 | "\(.module_name) \(.executed)"' 00:34:59.024 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:59.025 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:59.025 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:59.025 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:59.025 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 332975 00:34:59.025 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 332975 ']' 00:34:59.025 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 332975 00:34:59.025 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:59.025 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:59.025 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 332975 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 332975' 00:34:59.283 killing process with pid 332975 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 332975 00:34:59.283 Received shutdown signal, test time was about 2.000000 seconds 00:34:59.283 00:34:59.283 Latency(us) 00:34:59.283 [2024-11-19T22:59:33.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.283 [2024-11-19T22:59:33.595Z] =================================================================================================================== 00:34:59.283 [2024-11-19T22:59:33.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 332975 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=333384 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 333384 /var/tmp/bperf.sock 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 333384 ']' 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:59.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:59.283 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:59.283 [2024-11-19 23:59:33.582975] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:34:59.283 [2024-11-19 23:59:33.583090] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333384 ] 00:34:59.283 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:59.283 Zero copy mechanism will not be used. 00:34:59.542 [2024-11-19 23:59:33.655202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.542 [2024-11-19 23:59:33.707283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.800 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:59.800 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:59.800 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:59.800 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:59.800 23:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:00.058 23:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:00.058 23:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:00.625 nvme0n1 00:35:00.625 23:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:00.625 23:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:00.625 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:00.625 Zero copy mechanism will not be used. 00:35:00.625 Running I/O for 2 seconds... 
00:35:02.496 5638.00 IOPS, 704.75 MiB/s [2024-11-19T22:59:36.808Z] 5526.00 IOPS, 690.75 MiB/s 00:35:02.496 Latency(us) 00:35:02.496 [2024-11-19T22:59:36.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.496 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:02.496 nvme0n1 : 2.00 5521.38 690.17 0.00 0.00 2889.96 2184.53 13398.47 00:35:02.496 [2024-11-19T22:59:36.808Z] =================================================================================================================== 00:35:02.496 [2024-11-19T22:59:36.808Z] Total : 5521.38 690.17 0.00 0.00 2889.96 2184.53 13398.47 00:35:02.496 { 00:35:02.496 "results": [ 00:35:02.496 { 00:35:02.496 "job": "nvme0n1", 00:35:02.496 "core_mask": "0x2", 00:35:02.496 "workload": "randwrite", 00:35:02.496 "status": "finished", 00:35:02.496 "queue_depth": 16, 00:35:02.496 "io_size": 131072, 00:35:02.496 "runtime": 2.004572, 00:35:02.496 "iops": 5521.378129595744, 00:35:02.496 "mibps": 690.172266199468, 00:35:02.496 "io_failed": 0, 00:35:02.496 "io_timeout": 0, 00:35:02.496 "avg_latency_us": 2889.963231203737, 00:35:02.496 "min_latency_us": 2184.5333333333333, 00:35:02.496 "max_latency_us": 13398.471111111112 00:35:02.496 } 00:35:02.496 ], 00:35:02.496 "core_count": 1 00:35:02.496 } 00:35:02.496 23:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:02.496 23:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:02.496 23:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:02.496 23:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:02.496 23:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:02.496 | select(.opcode=="crc32c") 00:35:02.496 | "\(.module_name) \(.executed)"' 00:35:02.755 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:02.755 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:02.755 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:02.755 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:02.755 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 333384 00:35:02.755 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 333384 ']' 00:35:02.755 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 333384 00:35:02.755 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:02.755 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:02.755 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333384 00:35:03.013 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:03.013 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:03.013 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333384' 00:35:03.013 killing process with pid 333384 00:35:03.013 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 333384 00:35:03.013 Received shutdown signal, test time was about 2.000000 seconds 00:35:03.013 00:35:03.013 Latency(us) 00:35:03.013 [2024-11-19T22:59:37.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.013 [2024-11-19T22:59:37.325Z] =================================================================================================================== 00:35:03.013 [2024-11-19T22:59:37.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:03.013 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 333384 00:35:03.013 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 332026 00:35:03.013 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 332026 ']' 00:35:03.013 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 332026 00:35:03.013 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:03.013 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:03.013 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 332026 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 332026' 00:35:03.272 killing process with pid 332026 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 332026 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 332026 00:35:03.272 00:35:03.272 real 0m15.644s 00:35:03.272 user 0m31.491s 00:35:03.272 sys 0m4.254s 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:03.272 ************************************ 00:35:03.272 END TEST nvmf_digest_clean 00:35:03.272 ************************************ 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:03.272 ************************************ 00:35:03.272 START TEST nvmf_digest_error 00:35:03.272 ************************************ 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:03.272 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.530 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=333932 00:35:03.530 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:03.530 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 333932 00:35:03.530 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 333932 ']' 00:35:03.530 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.530 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.530 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.530 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.530 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.530 [2024-11-19 23:59:37.630683] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:35:03.530 [2024-11-19 23:59:37.630765] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.530 [2024-11-19 23:59:37.703423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.530 [2024-11-19 23:59:37.751745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.531 [2024-11-19 23:59:37.751812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.531 [2024-11-19 23:59:37.751825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.531 [2024-11-19 23:59:37.751837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.531 [2024-11-19 23:59:37.751846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
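The nvmf_digest_error test starting here differs from the clean runs in that crc32c on the target is routed through the error accel module and then deliberately corrupted, so the host-side reads complete with data digest errors and transient transport errors. A compressed sketch of the setup recorded in the next few lines (script paths shortened; rpc_cmd in the script talks to the nvmf target's default socket, bperf_rpc to /var/tmp/bperf.sock):

  # target side: nvmf_tgt was started with --wait-for-rpc
  rpc.py accel_assign_opc -o crc32c -m error       # crc32c now goes through the error accel module
  # (the null0 bdev and the TCP listener on 10.0.0.2:4420 are created next, per the notices below)
  # host side: bdevperf over the bperf socket, digest enabled
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side again: inject crc32c corruption (arguments exactly as in the xtrace), which shows up
  # below as "data digest error on tqpair" and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256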
00:35:03.531 [2024-11-19 23:59:37.752500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.790 [2024-11-19 23:59:37.901307] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.790 23:59:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.790 null0 00:35:03.790 [2024-11-19 23:59:38.017229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:03.790 [2024-11-19 23:59:38.041503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=333968 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 333968 /var/tmp/bperf.sock 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 333968 ']' 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:03.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:03.790 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:03.790 [2024-11-19 23:59:38.092270] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:35:03.790 [2024-11-19 23:59:38.092365] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333968 ] 00:35:04.049 [2024-11-19 23:59:38.158812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.049 [2024-11-19 23:59:38.205243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.049 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.049 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:04.049 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:04.049 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:04.308 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:04.308 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.308 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:04.308 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.308 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.308 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.963 nvme0n1 00:35:04.963 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:04.963 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.963 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set 
+x 00:35:04.963 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.963 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:04.963 23:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:04.963 Running I/O for 2 seconds... 00:35:04.963 [2024-11-19 23:59:39.139294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:04.963 [2024-11-19 23:59:39.139364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.963 [2024-11-19 23:59:39.139387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.963 [2024-11-19 23:59:39.155751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:04.963 [2024-11-19 23:59:39.155791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.963 [2024-11-19 23:59:39.155812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.963 [2024-11-19 23:59:39.173638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:04.963 [2024-11-19 23:59:39.173677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.963 [2024-11-19 23:59:39.173697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.963 [2024-11-19 23:59:39.185366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:04.963 [2024-11-19 23:59:39.185404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.963 [2024-11-19 23:59:39.185423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.963 [2024-11-19 23:59:39.202500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:04.963 [2024-11-19 23:59:39.202537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.963 [2024-11-19 23:59:39.202558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.963 [2024-11-19 23:59:39.218927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:04.963 [2024-11-19 23:59:39.218976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.964 [2024-11-19 23:59:39.218996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.964 [2024-11-19 23:59:39.231234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:04.964 [2024-11-19 23:59:39.231265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.964 [2024-11-19 23:59:39.231281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.964 [2024-11-19 23:59:39.248382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:04.964 [2024-11-19 23:59:39.248421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.964 [2024-11-19 23:59:39.248441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:04.964 [2024-11-19 23:59:39.264452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:04.964 [2024-11-19 23:59:39.264490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:04.964 [2024-11-19 23:59:39.264509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.278658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.278697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.278718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.291976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.292013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.292033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.309724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.309760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.309781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.327255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.327287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.327318] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.344659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.344695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.344715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.360448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.360486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.360506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.372616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.372652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.372671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.388042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.388089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.388124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.403581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.403618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.403638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.417164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.417195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.417213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.432828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.432864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 
23:59:39.432883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.444081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.444139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.444155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.459252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.459288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.459310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.473388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.473421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.473446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.486053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.486099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.486134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.499284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.499316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.499333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.514066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.514105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.514137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.222 [2024-11-19 23:59:39.527183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.222 [2024-11-19 23:59:39.527215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:606 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:05.222 [2024-11-19 23:59:39.527236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.539017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.539064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.539090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.555166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.555197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.555214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.567661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.567691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.567708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.581838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.581870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.581887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.597265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.597299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.597316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.609862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.609892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.609908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.624886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.624916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:20639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.624937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.635646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.635678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.635697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.650220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.650253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.650271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.664876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.664906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.664922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.677485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.677516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.677533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.689150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.689180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.689197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.702252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.702288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.702316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.714989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.715020] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.715037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.726749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.726780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.726796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.740708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.740738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.740754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.752224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.752255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.752271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.765830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.765861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.765877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.481 [2024-11-19 23:59:39.782465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.481 [2024-11-19 23:59:39.782495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.481 [2024-11-19 23:59:39.782511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.740 [2024-11-19 23:59:39.798160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.740 [2024-11-19 23:59:39.798197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.740 [2024-11-19 23:59:39.798216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.740 [2024-11-19 23:59:39.809917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21533f0) 00:35:05.740 [2024-11-19 23:59:39.809950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.740 [2024-11-19 23:59:39.809971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.825611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.825652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.825669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.842405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.842437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.842468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.853814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.853844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.853860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.868566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.868598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.868615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.883551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.883597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.883614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.894949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.894980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.894996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.910180] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.910213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.910231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.922994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.923028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.923060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.937222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.937254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.937272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.948754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.948800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.948818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.962967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.962998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.963016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.975465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.975495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.975511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:39.990618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:39.990651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:39.990668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:40.004701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:40.004733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:40.004749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:40.016600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:40.016636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:40.016653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:40.029634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:40.029669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:40.029687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:05.741 [2024-11-19 23:59:40.043645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:05.741 [2024-11-19 23:59:40.043682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:05.741 [2024-11-19 23:59:40.043700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.000 [2024-11-19 23:59:40.057800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.000 [2024-11-19 23:59:40.057838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.000 [2024-11-19 23:59:40.057875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.000 [2024-11-19 23:59:40.069575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.000 [2024-11-19 23:59:40.069611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.000 [2024-11-19 23:59:40.069629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.000 [2024-11-19 23:59:40.081892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.000 [2024-11-19 23:59:40.081927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.000 [2024-11-19 23:59:40.081944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.000 [2024-11-19 23:59:40.095157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.000 [2024-11-19 23:59:40.095189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.000 [2024-11-19 23:59:40.095207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.108711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.108740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.108757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 17988.00 IOPS, 70.27 MiB/s [2024-11-19T22:59:40.313Z] [2024-11-19 23:59:40.121932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.121962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.121978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.134822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.134854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.134871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.149902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.149933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.149950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.161626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.161657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.161673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.176600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.176632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 
[2024-11-19 23:59:40.176650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.189935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.189967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.189985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.201516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.201548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.201565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.214480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.214525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.214541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.226547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.226579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.226596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.241766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.241796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.241813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.255153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.255188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.255205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.268572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.268604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9282 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.268622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.280379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.280424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.280450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.293252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.293284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.293316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.001 [2024-11-19 23:59:40.306722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.001 [2024-11-19 23:59:40.306772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.001 [2024-11-19 23:59:40.306790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.260 [2024-11-19 23:59:40.318696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.260 [2024-11-19 23:59:40.318742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.260 [2024-11-19 23:59:40.318760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.260 [2024-11-19 23:59:40.330826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.260 [2024-11-19 23:59:40.330873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.260 [2024-11-19 23:59:40.330890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.260 [2024-11-19 23:59:40.345822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.260 [2024-11-19 23:59:40.345853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.260 [2024-11-19 23:59:40.345869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.260 [2024-11-19 23:59:40.358663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.260 [2024-11-19 23:59:40.358696] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.260 [2024-11-19 23:59:40.358729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.260 [2024-11-19 23:59:40.371773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.260 [2024-11-19 23:59:40.371802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.260 [2024-11-19 23:59:40.371818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.260 [2024-11-19 23:59:40.384687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.260 [2024-11-19 23:59:40.384719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.260 [2024-11-19 23:59:40.384736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.260 [2024-11-19 23:59:40.397012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.260 [2024-11-19 23:59:40.397103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.260 [2024-11-19 23:59:40.397123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.260 [2024-11-19 23:59:40.409952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.260 [2024-11-19 23:59:40.409983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.260 [2024-11-19 23:59:40.409999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.260 [2024-11-19 23:59:40.423204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.260 [2024-11-19 23:59:40.423236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.260 [2024-11-19 23:59:40.423253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.260 [2024-11-19 23:59:40.435505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.260 [2024-11-19 23:59:40.435536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.260 [2024-11-19 23:59:40.435553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.261 [2024-11-19 23:59:40.449179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.261 [2024-11-19 
23:59:40.449211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.261 [2024-11-19 23:59:40.449229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.261 [2024-11-19 23:59:40.462747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.261 [2024-11-19 23:59:40.462778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.261 [2024-11-19 23:59:40.462795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.261 [2024-11-19 23:59:40.473946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.261 [2024-11-19 23:59:40.473979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.261 [2024-11-19 23:59:40.473996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.261 [2024-11-19 23:59:40.488834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.261 [2024-11-19 23:59:40.488867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.261 [2024-11-19 23:59:40.488884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.261 [2024-11-19 23:59:40.500184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.261 [2024-11-19 23:59:40.500216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.261 [2024-11-19 23:59:40.500233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.261 [2024-11-19 23:59:40.514525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.261 [2024-11-19 23:59:40.514555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.261 [2024-11-19 23:59:40.514571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.261 [2024-11-19 23:59:40.527335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.261 [2024-11-19 23:59:40.527382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.261 [2024-11-19 23:59:40.527400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.261 [2024-11-19 23:59:40.540010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x21533f0) 00:35:06.261 [2024-11-19 23:59:40.540039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.261 [2024-11-19 23:59:40.540079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.261 [2024-11-19 23:59:40.552329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.261 [2024-11-19 23:59:40.552375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.261 [2024-11-19 23:59:40.552393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.261 [2024-11-19 23:59:40.565206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.261 [2024-11-19 23:59:40.565239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.261 [2024-11-19 23:59:40.565272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.578130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.578164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.578198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.589707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.589738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.589755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.604608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.604641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.604659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.618662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.618694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.618720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.633692] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.633724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.633741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.644744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.644775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.644791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.661242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.661273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.661289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.678062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.678102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.678131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.692275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.692307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.692324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.705081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.705113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.705130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.716595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.716626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.716643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.730180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.730210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.730227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.742457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.742487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.742502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.520 [2024-11-19 23:59:40.756290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.520 [2024-11-19 23:59:40.756323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.520 [2024-11-19 23:59:40.756340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.521 [2024-11-19 23:59:40.769672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.521 [2024-11-19 23:59:40.769704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.521 [2024-11-19 23:59:40.769721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.521 [2024-11-19 23:59:40.781632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.521 [2024-11-19 23:59:40.781663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.521 [2024-11-19 23:59:40.781679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.521 [2024-11-19 23:59:40.797358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.521 [2024-11-19 23:59:40.797409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.521 [2024-11-19 23:59:40.797428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.521 [2024-11-19 23:59:40.814353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.521 [2024-11-19 23:59:40.814399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.521 [2024-11-19 23:59:40.814415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.779 [2024-11-19 23:59:40.831331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.779 [2024-11-19 23:59:40.831377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.779 [2024-11-19 23:59:40.831399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.779 [2024-11-19 23:59:40.844124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.779 [2024-11-19 23:59:40.844172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.779 [2024-11-19 23:59:40.844189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.779 [2024-11-19 23:59:40.857666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.779 [2024-11-19 23:59:40.857703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.779 [2024-11-19 23:59:40.857737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.779 [2024-11-19 23:59:40.871211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.779 [2024-11-19 23:59:40.871259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.779 [2024-11-19 23:59:40.871276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.779 [2024-11-19 23:59:40.884537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.779 [2024-11-19 23:59:40.884574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.779 [2024-11-19 23:59:40.884593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.779 [2024-11-19 23:59:40.898232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.779 [2024-11-19 23:59:40.898266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.779 [2024-11-19 23:59:40.898299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.779 [2024-11-19 23:59:40.911892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.779 [2024-11-19 23:59:40.911929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.780 [2024-11-19 23:59:40.911948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.780 [2024-11-19 23:59:40.927187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.780 [2024-11-19 23:59:40.927220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.780 [2024-11-19 23:59:40.927253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.780 [2024-11-19 23:59:40.939745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.780 [2024-11-19 23:59:40.939791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.780 [2024-11-19 23:59:40.939810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.780 [2024-11-19 23:59:40.956182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.780 [2024-11-19 23:59:40.956215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.780 [2024-11-19 23:59:40.956233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.780 [2024-11-19 23:59:40.970932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.780 [2024-11-19 23:59:40.970968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.780 [2024-11-19 23:59:40.970987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.780 [2024-11-19 23:59:40.982998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.780 [2024-11-19 23:59:40.983042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.780 [2024-11-19 23:59:40.983059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.780 [2024-11-19 23:59:40.996328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.780 [2024-11-19 23:59:40.996378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.780 [2024-11-19 23:59:40.996398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.780 [2024-11-19 23:59:41.010873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.780 [2024-11-19 23:59:41.010909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:06.780 [2024-11-19 23:59:41.010928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.780 [2024-11-19 23:59:41.024235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.780 [2024-11-19 23:59:41.024286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.780 [2024-11-19 23:59:41.024303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.780 [2024-11-19 23:59:41.040640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.780 [2024-11-19 23:59:41.040686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.780 [2024-11-19 23:59:41.040705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.780 [2024-11-19 23:59:41.052052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.780 [2024-11-19 23:59:41.052097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.780 [2024-11-19 23:59:41.052117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.780 [2024-11-19 23:59:41.069109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.780 [2024-11-19 23:59:41.069160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.780 [2024-11-19 23:59:41.069177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.780 [2024-11-19 23:59:41.085761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:06.780 [2024-11-19 23:59:41.085815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.780 [2024-11-19 23:59:41.085842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.039 [2024-11-19 23:59:41.097721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:07.039 [2024-11-19 23:59:41.097758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.039 [2024-11-19 23:59:41.097779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.039 [2024-11-19 23:59:41.114254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21533f0) 00:35:07.039 [2024-11-19 23:59:41.114301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:11712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.039 [2024-11-19 23:59:41.114319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.039 18411.50 IOPS, 71.92 MiB/s 00:35:07.039 Latency(us) 00:35:07.039 [2024-11-19T22:59:41.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.039 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:07.039 nvme0n1 : 2.01 18426.99 71.98 0.00 0.00 6938.90 3519.53 22719.15 00:35:07.039 [2024-11-19T22:59:41.351Z] =================================================================================================================== 00:35:07.039 [2024-11-19T22:59:41.351Z] Total : 18426.99 71.98 0.00 0.00 6938.90 3519.53 22719.15 00:35:07.039 { 00:35:07.039 "results": [ 00:35:07.039 { 00:35:07.039 "job": "nvme0n1", 00:35:07.039 "core_mask": "0x2", 00:35:07.039 "workload": "randread", 00:35:07.039 "status": "finished", 00:35:07.039 "queue_depth": 128, 00:35:07.039 "io_size": 4096, 00:35:07.039 "runtime": 2.005265, 00:35:07.039 "iops": 18426.990946333775, 00:35:07.039 "mibps": 71.98043338411631, 00:35:07.039 "io_failed": 0, 00:35:07.039 "io_timeout": 0, 00:35:07.039 "avg_latency_us": 6938.903348959634, 00:35:07.039 "min_latency_us": 3519.525925925926, 00:35:07.039 "max_latency_us": 22719.146666666667 00:35:07.039 } 00:35:07.039 ], 00:35:07.039 "core_count": 1 00:35:07.039 } 00:35:07.039 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:07.039 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:07.039 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:07.039 | .driver_specific 00:35:07.039 | .nvme_error 00:35:07.039 | .status_code 00:35:07.039 | .command_transient_transport_error' 00:35:07.039 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:07.298 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:35:07.298 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 333968 00:35:07.298 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 333968 ']' 00:35:07.298 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 333968 00:35:07.298 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:07.298 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.298 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333968 00:35:07.298 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:07.298 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:07.298 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333968' 00:35:07.298 killing process with pid 333968 00:35:07.298 
23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 333968
00:35:07.298 Received shutdown signal, test time was about 2.000000 seconds
00:35:07.298
00:35:07.298 Latency(us)
00:35:07.298 [2024-11-19T22:59:41.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:07.298 [2024-11-19T22:59:41.610Z] ===================================================================================================================
00:35:07.298 [2024-11-19T22:59:41.610Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:07.298 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 333968
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=334371
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 334371 /var/tmp/bperf.sock
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 334371 ']'
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:07.556 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:07.556 [2024-11-19 23:59:41.702932] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization...
00:35:07.556 [2024-11-19 23:59:41.703026] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334371 ]
00:35:07.556 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:07.556 Zero copy mechanism will not be used.
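
The run_bperf_err setup traced above corresponds roughly to the sketch below. It is reconstructed from the trace; the readiness loop is a simplified stand-in for autotest_common.sh's waitforlisten (not its actual implementation), and rpc_get_methods is used here only as a probe that the RPC socket is up.

# Simplified sketch of the bdevperf launch for this pass (randread, 128 KiB I/Os, queue depth 16).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randread -o 131072 -t 2 -q 16 -z &    # -z: idle until perform_tests arrives over RPC
bperfpid=$!

# Poll the UNIX-domain RPC socket until bdevperf answers, then it can be configured.
for _ in $(seq 1 100); do
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.1
done
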
00:35:07.556 [2024-11-19 23:59:41.780450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:07.556 [2024-11-19 23:59:41.829126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:07.815 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:07.815 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:35:07.815 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:07.815 23:59:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:08.074 23:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:08.074 23:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.074 23:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:08.074 23:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.074 23:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:08.074 23:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:08.332 nvme0n1
00:35:08.332 23:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:08.332 23:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.332 23:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:08.591 23:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.591 23:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:08.591 23:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:08.591 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:08.591 Zero copy mechanism will not be used.
00:35:08.591 Running I/O for 2 seconds...
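
The configuration sequence traced above (enable per-bdev NVMe error counters, turn off bdev retries so transport errors surface, attach the target with data digest enabled, corrupt crc32c results in the accel layer, then start the workload) can be summarized by the sketch below. It mirrors the trace rather than the digest.sh source: the accel_error_inject_error calls are issued via rpc_cmd, i.e. against the test's main RPC socket rather than the bperf socket, which the sketch reflects by omitting -s for those two calls; the meaning of -i 32 is taken as-is from the trace.

# Sketch of the RPC sequence, assuming the job's paths and default RPC socket for rpc_cmd.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock
RPC="$SPDK_DIR/scripts/rpc.py"

"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$RPC" accel_error_inject_error -o crc32c -t disable         # injection off before attach, as in the trace
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0            # creates bdev nvme0n1
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32    # crc32c corruption, parameters as traced
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

Every data digest mismatch that this injection provokes shows up below as a pair of nvme_tcp.c "data digest error" and nvme_qpair.c "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" records, which is exactly what get_transient_errcount tallies at the end of the pass.
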
00:35:08.591 [2024-11-19 23:59:42.762420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.762489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.762511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.767996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.768035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.768060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.773517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.773553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.773578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.779754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.779791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.779819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.786188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.786221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.786238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.790532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.790569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.790597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.795314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.795369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.795388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.801470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.801506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.801532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.807428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.807465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.807484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.813291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.813323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.813345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.819820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.819857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.819877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.826447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.826485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.826504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.832389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.832425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.832446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.838058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.838116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.838140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.843945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.843981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.844007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.849534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.849570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.849590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.854883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.854919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.854939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.858704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.858739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.858759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.863762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.863797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.863817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.869264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.869295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.869313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.874730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.874762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.874789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.881046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.881097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.881118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.887621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.887658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.887678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.893685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.893722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.893760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.592 [2024-11-19 23:59:42.899790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.592 [2024-11-19 23:59:42.899828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.592 [2024-11-19 23:59:42.899858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.852 [2024-11-19 23:59:42.906708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.852 [2024-11-19 23:59:42.906746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.852 [2024-11-19 23:59:42.906782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.852 [2024-11-19 23:59:42.912955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.852 [2024-11-19 23:59:42.912991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.852 [2024-11-19 23:59:42.913011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.852 [2024-11-19 23:59:42.918658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.852 [2024-11-19 23:59:42.918693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:08.852 [2024-11-19 23:59:42.918714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.852 [2024-11-19 23:59:42.924306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.852 [2024-11-19 23:59:42.924338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.852 [2024-11-19 23:59:42.924365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.852 [2024-11-19 23:59:42.930759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.852 [2024-11-19 23:59:42.930795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.852 [2024-11-19 23:59:42.930816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.852 [2024-11-19 23:59:42.936969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.852 [2024-11-19 23:59:42.937007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.852 [2024-11-19 23:59:42.937028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.852 [2024-11-19 23:59:42.942884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.852 [2024-11-19 23:59:42.942920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:42.942940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:42.948385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:42.948443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:42.948463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:42.953941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:42.953977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:42.953996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:42.959929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:42.959966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:42.959985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:42.965977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:42.966013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:42.966033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:42.971700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:42.971735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:42.971755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:42.977660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:42.977695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:42.977715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:42.983579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:42.983617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:42.983637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:42.989265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:42.989298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:42.989316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:42.995197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:42.995229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:42.995247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.001189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.001222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.001240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.007310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.007359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.007380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.013210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.013244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.013261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.019317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.019351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.019379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.025185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.025218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.025235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.030958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.030994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.031013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.036811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.036846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.036866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.040446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 
00:35:08.853 [2024-11-19 23:59:43.040482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.040503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.046244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.046276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.046300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.052151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.052197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.052214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.057848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.057884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.057904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.063870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.063906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.063925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.069308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.069339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.069357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.074545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.074577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.074594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.079629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.079663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.079683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.085194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.085225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.085241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.090357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.853 [2024-11-19 23:59:43.090402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.853 [2024-11-19 23:59:43.090422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.853 [2024-11-19 23:59:43.095702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.095737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.095755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.854 [2024-11-19 23:59:43.100633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.100670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.100690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.854 [2024-11-19 23:59:43.105985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.106021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.106044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.854 [2024-11-19 23:59:43.111253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.111298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.111315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.854 [2024-11-19 23:59:43.116597] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.116633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.116653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.854 [2024-11-19 23:59:43.121875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.121909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.121928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.854 [2024-11-19 23:59:43.127223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.127254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.127272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:08.854 [2024-11-19 23:59:43.132489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.132524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.132543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.854 [2024-11-19 23:59:43.137868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.137903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.137928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.854 [2024-11-19 23:59:43.143245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.143291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.143308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:08.854 [2024-11-19 23:59:43.149341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.149391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.149411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:08.854 [2024-11-19 23:59:43.154967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.155002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.155021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.854 [2024-11-19 23:59:43.160729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:08.854 [2024-11-19 23:59:43.160767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.854 [2024-11-19 23:59:43.160787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.166263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.166298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.166316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.171589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.171627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.171647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.176876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.176911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.176931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.182161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.182207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.182224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.187519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.187561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.187582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.192888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.192923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.192943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.198307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.198341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.198375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.203728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.203764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.203784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.209113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.209161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.209178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.214486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.214523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.214543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.220762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.220798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.220817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.226131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.226163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.226180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.231509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.231550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.231569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.237063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.237106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.237141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.242473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.242508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.242528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.248016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.248050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.248085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.253292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.253325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.253343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.259100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.259148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.259166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.265976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.266023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:09.114 [2024-11-19 23:59:43.266044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.273774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.273811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.273831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.280441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.114 [2024-11-19 23:59:43.280477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.114 [2024-11-19 23:59:43.280496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.114 [2024-11-19 23:59:43.286855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.286892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.286919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.293355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.293387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.293420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.299796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.299832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.299852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.304981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.305016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.305036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.308131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.308163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.308180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.313324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.313373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.313394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.318678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.318713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.318733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.324003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.324038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.324058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.330438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.330473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.330493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.335788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.335829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.335849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.341049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.341093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.341124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.347220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.347253] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.347271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.353378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.353437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.353458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.359644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.359681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.359701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.365798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.365834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.365854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.371988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.372024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.372044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.378342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.378375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.378409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.384512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.384548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.384568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.390751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.390789] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.390809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.396886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.396923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.396943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.402881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.402917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.402937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.409294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.409328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.409364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.416530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.416567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.416587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.115 [2024-11-19 23:59:43.422685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.115 [2024-11-19 23:59:43.422724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.115 [2024-11-19 23:59:43.422744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.375 [2024-11-19 23:59:43.429406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.375 [2024-11-19 23:59:43.429440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.429475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.435915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.435953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.435973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.443348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.443398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.443426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.449297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.449331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.449348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.455241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.455273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.455290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.461176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.461224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.461241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.468292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.468323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.468340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.476083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.476134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.476152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.482795] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.482832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.482852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.489596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.489633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.489652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.496049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.496093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.496132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.502740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.502776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.502796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.508353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.508403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.508424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.514801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.514836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.514856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.522514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.522551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.522571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:09.376 [2024-11-19 23:59:43.530404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.530441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.530462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.539137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.539170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.539188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.547443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.547481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.547506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.555306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.555339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.555357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.563236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.563269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.563293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.567752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.567788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.567808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.575611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.575648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.575669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.583837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.583873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.583892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.591808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.591846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.591867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.599841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.599878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.599899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.606948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.606985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.607005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.376 [2024-11-19 23:59:43.612670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.376 [2024-11-19 23:59:43.612707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.376 [2024-11-19 23:59:43.612727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.617505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.617541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.617560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.620790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.620831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.620851] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.625318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.625364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.625382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.630675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.630709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.630728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.635955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.635989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.636008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.641152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.641199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.641216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.646467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.646502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.646521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.651632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.651666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.651685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.656852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.656887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.656906] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.662131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.662161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.662191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.667599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.667634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.667653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.673017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.673052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.673084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.677194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.677229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.677247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.377 [2024-11-19 23:59:43.680446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.377 [2024-11-19 23:59:43.680492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.377 [2024-11-19 23:59:43.680527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.685738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.685775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.685794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.691270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.691318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:09.637 [2024-11-19 23:59:43.691335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.696817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.696853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.696873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.702270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.702300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.702317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.707658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.707693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.707720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.712971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.713006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.713025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.718381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.718416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.718436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.723795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.723831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.723850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.729694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.729731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.729751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.735372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.735404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.735438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.740916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.740948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.740967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.746356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.746407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.746425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.751816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.751854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.751874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.637 5246.00 IOPS, 655.75 MiB/s [2024-11-19T22:59:43.949Z] [2024-11-19 23:59:43.758942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.758984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.759004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.764339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.764386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.764402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.769695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 
23:59:43.769730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.769750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.775001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.775036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.775055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.781238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.781286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.781310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.788559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.788596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.788616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.795863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.795900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.795919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.802955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.637 [2024-11-19 23:59:43.802991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.637 [2024-11-19 23:59:43.803011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.637 [2024-11-19 23:59:43.809835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.809872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.809898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.816201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.816235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.816253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.822382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.822418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.822438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.828136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.828183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.828201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.834010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.834045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.834065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.840425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.840463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.840483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.846800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.846837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.846857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.852689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.852725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.852746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.858880] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.858917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.858936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.865266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.865305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.865324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.871179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.871212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.871230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.877096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.877147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.877165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.883527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.883564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.883584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.890275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.890308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.890326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.893905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.893940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.893959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:35:09.638 [2024-11-19 23:59:43.899951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.899988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.900008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.905790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.905826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.905846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.911482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.911515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.911533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.916883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.916919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.916940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.922219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.922250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.922267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.927513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.927548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.927567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.932816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.932851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.932870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.938127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.938157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.938173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.638 [2024-11-19 23:59:43.943728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.638 [2024-11-19 23:59:43.943765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-11-19 23:59:43.943785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:43.949030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:43.949067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 23:59:43.949114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:43.954512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:43.954551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 23:59:43.954571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:43.959853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:43.959889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 23:59:43.959916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:43.963458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:43.963500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 23:59:43.963521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:43.967897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:43.967932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 23:59:43.967951] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:43.973098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:43.973148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 23:59:43.973166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:43.979148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:43.979185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 23:59:43.979203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:43.984594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:43.984630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 23:59:43.984649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:43.990707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:43.990742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 23:59:43.990762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:43.997798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:43.997848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 23:59:43.997868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:44.005534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:44.005571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 23:59:44.005591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:44.013313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:44.013354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 
23:59:44.013390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:44.020818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:44.020852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-11-19 23:59:44.020869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.898 [2024-11-19 23:59:44.028400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.898 [2024-11-19 23:59:44.028433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.028450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.036095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.036155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.036173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.043607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.043641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.043660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.051278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.051310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.051328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.058814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.058846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.058863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.066538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.066585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15904 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.066601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.074444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.074492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.074510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.082614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.082662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.082680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.090936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.090984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.091001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.098474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.098508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.098526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.106745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.106779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.106797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.114024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.114080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.114101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.119869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.119902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.119920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.125722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.125756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.125775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.131450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.131483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.131502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.137672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.137705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.137731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.143940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.143987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.144005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.149902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.149933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.149950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.155884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.155916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.155933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.162204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.162236] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.162254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.168083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.168116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.168133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.173858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.173905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.173923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.180326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.180376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.180394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.186781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.186829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.186847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.192901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.192934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.192951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.200515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.200561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.200578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.899 [2024-11-19 23:59:44.206983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x220b920) 00:35:09.899 [2024-11-19 23:59:44.207017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.899 [2024-11-19 23:59:44.207040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.159 [2024-11-19 23:59:44.212744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.159 [2024-11-19 23:59:44.212780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.159 [2024-11-19 23:59:44.212799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.159 [2024-11-19 23:59:44.218768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.159 [2024-11-19 23:59:44.218817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.159 [2024-11-19 23:59:44.218835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.159 [2024-11-19 23:59:44.223917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.159 [2024-11-19 23:59:44.223949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.159 [2024-11-19 23:59:44.223967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.159 [2024-11-19 23:59:44.229059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.159 [2024-11-19 23:59:44.229099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.159 [2024-11-19 23:59:44.229117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.159 [2024-11-19 23:59:44.234065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.159 [2024-11-19 23:59:44.234103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.159 [2024-11-19 23:59:44.234121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.159 [2024-11-19 23:59:44.239160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.159 [2024-11-19 23:59:44.239193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.159 [2024-11-19 23:59:44.239218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.159 [2024-11-19 23:59:44.243621] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.159 [2024-11-19 23:59:44.243653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.159 [2024-11-19 23:59:44.243671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.159 [2024-11-19 23:59:44.246716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.159 [2024-11-19 23:59:44.246747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.159 [2024-11-19 23:59:44.246765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.159 [2024-11-19 23:59:44.251777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.251808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.251826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.257631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.257663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.257689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.265063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.265103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.265121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.270728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.270761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.270779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.276743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.276776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.276794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.283104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.283136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.283153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.288644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.288690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.288709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.294997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.295029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.295048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.300953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.300987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.301005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.307119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.307152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.307170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.312903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.312936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.312954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.318616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.318650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.318668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.325612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.325646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.325664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.333633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.333666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.333684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.340951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.340985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.341003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.344958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.344990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.345007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.352215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.352249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.352267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.359388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.359435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.359452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.365556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.365588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.365606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.371871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.371904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.371922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.377231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.377265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.377283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.382807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.382843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.382862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.388717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.388750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.388771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.394461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.394494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.394520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.399308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.399341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.399366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.404130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.404162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 
[2024-11-19 23:59:44.404179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.409079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.409110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.409127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.414021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.414053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.414078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.418898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.418944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.418968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.423874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.423906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.423938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.428878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.428909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.428926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.433874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.433906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.433924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.438728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.438766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.160 [2024-11-19 23:59:44.438785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.160 [2024-11-19 23:59:44.443659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.160 [2024-11-19 23:59:44.443689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.161 [2024-11-19 23:59:44.443707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.161 [2024-11-19 23:59:44.448599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.161 [2024-11-19 23:59:44.448632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.161 [2024-11-19 23:59:44.448650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.161 [2024-11-19 23:59:44.453236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.161 [2024-11-19 23:59:44.453268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.161 [2024-11-19 23:59:44.453285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.161 [2024-11-19 23:59:44.458062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.161 [2024-11-19 23:59:44.458102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.161 [2024-11-19 23:59:44.458120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.161 [2024-11-19 23:59:44.462888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.161 [2024-11-19 23:59:44.462924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.161 [2024-11-19 23:59:44.462941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.161 [2024-11-19 23:59:44.467895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.161 [2024-11-19 23:59:44.467929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.161 [2024-11-19 23:59:44.467946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.472918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.472953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.472971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.477766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.477800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.477823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.482508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.482541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.482558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.487400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.487451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.487468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.491961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.491994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.492012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.494825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.494856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.494874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.499918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.499951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.499970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.505424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.505457] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.505475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.510266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.510299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.510317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.515530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.515563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.515581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.521225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.521264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.521283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.527102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.527150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.527168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.532663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.532696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.532714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.536533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.536565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.536583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.541247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.541281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.541299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.547322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.421 [2024-11-19 23:59:44.547357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.421 [2024-11-19 23:59:44.547391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.421 [2024-11-19 23:59:44.553424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.553456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.553474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.559323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.559356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.559374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.564942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.564989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.565006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.570184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.570218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.570236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.576313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.576346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.576363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.582511] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.582544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.582561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.588425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.588459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.588477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.592427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.592460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.592477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.597347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.597395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.597412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.603390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.603421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.603437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.609423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.609471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.609490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.614884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.614920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.614948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:10.422 [2024-11-19 23:59:44.620642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.620675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.620696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.626668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.626701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.626718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.632829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.632861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.632879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.638855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.638887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.638905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.645344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.645378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.645396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.651228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.651261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.651279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.656671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.656704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.656722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.661546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.661577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.661599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.667020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.667060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.667086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.672523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.672554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.672571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.679204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.679236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.679255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.686796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.686829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.686862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.692723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.692769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.692786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.698696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.698744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.698762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.704536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.422 [2024-11-19 23:59:44.704573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.422 [2024-11-19 23:59:44.704592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.422 [2024-11-19 23:59:44.710349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.423 [2024-11-19 23:59:44.710381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.423 [2024-11-19 23:59:44.710399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.423 [2024-11-19 23:59:44.713210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.423 [2024-11-19 23:59:44.713242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.423 [2024-11-19 23:59:44.713259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.423 [2024-11-19 23:59:44.717574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.423 [2024-11-19 23:59:44.717606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.423 [2024-11-19 23:59:44.717623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.423 [2024-11-19 23:59:44.722938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.423 [2024-11-19 23:59:44.722970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.423 [2024-11-19 23:59:44.723002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.423 [2024-11-19 23:59:44.728872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.423 [2024-11-19 23:59:44.728907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.423 [2024-11-19 23:59:44.728926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.682 [2024-11-19 23:59:44.735060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.682 [2024-11-19 23:59:44.735103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.682 
[2024-11-19 23:59:44.735122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.682 [2024-11-19 23:59:44.740735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.682 [2024-11-19 23:59:44.740768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.682 [2024-11-19 23:59:44.740785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.682 [2024-11-19 23:59:44.745716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.682 [2024-11-19 23:59:44.745748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.682 [2024-11-19 23:59:44.745765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.682 [2024-11-19 23:59:44.750621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.682 [2024-11-19 23:59:44.750652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.682 [2024-11-19 23:59:44.750685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.682 5281.00 IOPS, 660.12 MiB/s [2024-11-19T22:59:44.994Z] [2024-11-19 23:59:44.757209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x220b920) 00:35:10.682 [2024-11-19 23:59:44.757240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.682 [2024-11-19 23:59:44.757256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.682 00:35:10.682 Latency(us) 00:35:10.682 [2024-11-19T22:59:44.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.682 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:10.682 nvme0n1 : 2.00 5282.33 660.29 0.00 0.00 3024.10 764.59 15146.10 00:35:10.682 [2024-11-19T22:59:44.994Z] =================================================================================================================== 00:35:10.682 [2024-11-19T22:59:44.994Z] Total : 5282.33 660.29 0.00 0.00 3024.10 764.59 15146.10 00:35:10.682 { 00:35:10.682 "results": [ 00:35:10.682 { 00:35:10.682 "job": "nvme0n1", 00:35:10.682 "core_mask": "0x2", 00:35:10.682 "workload": "randread", 00:35:10.682 "status": "finished", 00:35:10.682 "queue_depth": 16, 00:35:10.682 "io_size": 131072, 00:35:10.682 "runtime": 2.002524, 00:35:10.682 "iops": 5282.333694877065, 00:35:10.682 "mibps": 660.2917118596331, 00:35:10.682 "io_failed": 0, 00:35:10.682 "io_timeout": 0, 00:35:10.682 "avg_latency_us": 3024.1021872089523, 00:35:10.682 "min_latency_us": 764.5866666666667, 00:35:10.682 "max_latency_us": 15146.097777777777 00:35:10.682 } 00:35:10.682 ], 00:35:10.682 "core_count": 1 00:35:10.682 } 00:35:10.682 23:59:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:10.682 23:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:10.682 23:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:10.682 23:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:10.682 | .driver_specific
00:35:10.682 | .nvme_error
00:35:10.682 | .status_code
00:35:10.682 | .command_transient_transport_error'
00:35:10.941 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 342 > 0 ))
00:35:10.941 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 334371
00:35:10.941 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 334371 ']'
00:35:10.941 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 334371
00:35:10.941 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:10.941 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:10.941 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 334371
00:35:10.941 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:10.941 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:10.941 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 334371'
00:35:10.941 killing process with pid 334371
00:35:10.941 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 334371
00:35:10.941 Received shutdown signal, test time was about 2.000000 seconds
00:35:10.941
00:35:10.941 Latency(us)
00:35:10.941 [2024-11-19T22:59:45.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:10.941 [2024-11-19T22:59:45.253Z] ===================================================================================================================
00:35:10.941 [2024-11-19T22:59:45.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:10.941 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 334371
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=334785
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 334785 /var/tmp/bperf.sock
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 334785 ']'
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:11.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:11.200 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:11.200 [2024-11-19 23:59:45.304926] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization...
00:35:11.200 [2024-11-19 23:59:45.305018] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334785 ]
00:35:11.200 [2024-11-19 23:59:45.376353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:11.200 [2024-11-19 23:59:45.430985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:11.459 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:11.459 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:35:11.459 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:11.459 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:11.718 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:11.718 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:11.718 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:11.718 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:11.718 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:11.718 23:59:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:11.976 nvme0n1
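For reference, the randwrite digest-error pass traced above (and the perform_tests step that follows it) reduces to a short sequence of RPC calls. The sketch below is a minimal standalone approximation, not the test script itself: the helper names (rpc_bperf, rpc_tgt), the sleep in place of waitforlisten, and the choice of the default application socket for the injection calls are assumptions of this sketch; the paths, flags, target address and NQN are taken verbatim from the trace.

#!/usr/bin/env bash
# Sketch of the digest-error flow traced above. Assumes an nvmf target is
# already serving nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, as in this run.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

rpc_bperf() { "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" "$@"; }  # talks to bdevperf
rpc_tgt()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }             # default app socket (assumed destination of rpc_cmd)

# 1. Start bdevperf idle (-z) on a private RPC socket and give it time to listen.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
sleep 2

# 2. Count NVMe errors per controller and retry failed I/O indefinitely, so the
#    injected digest errors show up as counters instead of hard I/O failures.
rpc_bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Attach the controller with TCP data digest enabled; keep crc32c injection
#    disabled while connecting.
rpc_tgt accel_error_inject_error -o crc32c -t disable
rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Corrupt 256 crc32c operations, then drive the 2-second randwrite workload.
rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

# 5. The check the test performs: the transient transport error counter must be
#    non-zero after the run (342 in the pass above).
errs=$(rpc_bperf bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')
echo "command_transient_transport_error = $errs"

kill "$bperfpid"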
00:35:12.234 23:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:12.234 23:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.234 23:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:12.234 23:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.234 23:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:12.234 23:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.234 Running I/O for 2 seconds... 00:35:12.234 [2024-11-19 23:59:46.422408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166ec840 00:35:12.234 [2024-11-19 23:59:46.423413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.234 [2024-11-19 23:59:46.423476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:12.234 [2024-11-19 23:59:46.435663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166ed920 00:35:12.234 [2024-11-19 23:59:46.436801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.235 [2024-11-19 23:59:46.436851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:12.235 [2024-11-19 23:59:46.448993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e3498 00:35:12.235 [2024-11-19 23:59:46.450422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.235 [2024-11-19 23:59:46.450469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:12.235 [2024-11-19 23:59:46.462311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166ef6a8 00:35:12.235 [2024-11-19 23:59:46.463936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.235 [2024-11-19 23:59:46.463984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:12.235 [2024-11-19 23:59:46.473918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166de470 00:35:12.235 [2024-11-19 23:59:46.475356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.235 [2024-11-19 23:59:46.475388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:12.235 [2024-11-19 23:59:46.488184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d5e460) with pdu=0x2000166e0ea0 00:35:12.235 [2024-11-19 23:59:46.489524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.235 [2024-11-19 23:59:46.489552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:12.235 [2024-11-19 23:59:46.499799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e9168 00:35:12.235 [2024-11-19 23:59:46.501283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.235 [2024-11-19 23:59:46.501312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:12.235 [2024-11-19 23:59:46.511738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e1b48 00:35:12.235 [2024-11-19 23:59:46.513055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.235 [2024-11-19 23:59:46.513093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:12.235 [2024-11-19 23:59:46.523252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166f2948 00:35:12.235 [2024-11-19 23:59:46.524298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.235 [2024-11-19 23:59:46.524334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:12.235 [2024-11-19 23:59:46.535020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e5ec8 00:35:12.235 [2024-11-19 23:59:46.536207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.235 [2024-11-19 23:59:46.536240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:12.494 [2024-11-19 23:59:46.550258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166f57b0 00:35:12.494 [2024-11-19 23:59:46.552177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.494 [2024-11-19 23:59:46.552211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:12.494 [2024-11-19 23:59:46.563510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166f8a50 00:35:12.494 [2024-11-19 23:59:46.565579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.494 [2024-11-19 23:59:46.565629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.572066] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eaab8 00:35:12.495 [2024-11-19 23:59:46.572998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.573027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.586884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e49b0 00:35:12.495 [2024-11-19 23:59:46.588339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.588369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.599287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e4de8 00:35:12.495 [2024-11-19 23:59:46.600798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.600844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.612053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166f6020 00:35:12.495 [2024-11-19 23:59:46.613977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.614005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.624809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166fb048 00:35:12.495 [2024-11-19 23:59:46.626683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.626712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.633977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166fb8b8 00:35:12.495 [2024-11-19 23:59:46.635102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.635131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.649433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166f57b0 00:35:12.495 [2024-11-19 23:59:46.651097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.651141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.662564] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166ec408 00:35:12.495 [2024-11-19 23:59:46.664412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.664445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.675532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166f6020 00:35:12.495 [2024-11-19 23:59:46.677542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.677589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.688029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166ed4e8 00:35:12.495 [2024-11-19 23:59:46.689932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.689961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.700714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e6738 00:35:12.495 [2024-11-19 23:59:46.702500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.702529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.711865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166fcdd0 00:35:12.495 [2024-11-19 23:59:46.713577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.713608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.724503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e9168 00:35:12.495 [2024-11-19 23:59:46.725977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.726007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.736990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166ef6a8 00:35:12.495 [2024-11-19 23:59:46.738305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.738349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:12.495 
[2024-11-19 23:59:46.750059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166feb58 00:35:12.495 [2024-11-19 23:59:46.751344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.751382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.761410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e6b70 00:35:12.495 [2024-11-19 23:59:46.762562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.762590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.774033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166f7da8 00:35:12.495 [2024-11-19 23:59:46.775539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.775568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.786901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e73e0 00:35:12.495 [2024-11-19 23:59:46.788674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.788707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:12.495 [2024-11-19 23:59:46.798264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166fa7d8 00:35:12.495 [2024-11-19 23:59:46.799628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.495 [2024-11-19 23:59:46.799657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.810677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e95a0 00:35:12.755 [2024-11-19 23:59:46.812191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.812224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.823392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166de8a8 00:35:12.755 [2024-11-19 23:59:46.824938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.824967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 
dnr:0 00:35:12.755 [2024-11-19 23:59:46.835662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166fe720 00:35:12.755 [2024-11-19 23:59:46.836625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.836669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.847830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e27f0 00:35:12.755 [2024-11-19 23:59:46.848731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.848782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.859643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e6fa8 00:35:12.755 [2024-11-19 23:59:46.860705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.860735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.872297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166de470 00:35:12.755 [2024-11-19 23:59:46.873514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.873548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.885640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166ec408 00:35:12.755 [2024-11-19 23:59:46.887012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.887046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.898434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166f7da8 00:35:12.755 [2024-11-19 23:59:46.899306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.899350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.910366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166fc998 00:35:12.755 [2024-11-19 23:59:46.911184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.911228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 
cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.925543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166f46d0 00:35:12.755 [2024-11-19 23:59:46.927442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.927509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.934316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e5658 00:35:12.755 [2024-11-19 23:59:46.935043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.935103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.947645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166fb480 00:35:12.755 [2024-11-19 23:59:46.948698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.948732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.960794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e8088 00:35:12.755 [2024-11-19 23:59:46.961847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.961881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.975985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e95a0 00:35:12.755 [2024-11-19 23:59:46.977700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.977734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.989336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166ee190 00:35:12.755 [2024-11-19 23:59:46.991298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.991328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:46.998481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166e27f0 00:35:12.755 [2024-11-19 23:59:46.999400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:46.999446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:47.014524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:12.755 [2024-11-19 23:59:47.014814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:47.014847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:47.028803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:12.755 [2024-11-19 23:59:47.029094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:47.029127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:47.043326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:12.755 [2024-11-19 23:59:47.043618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:47.043650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:12.755 [2024-11-19 23:59:47.057714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:12.755 [2024-11-19 23:59:47.057989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:12.755 [2024-11-19 23:59:47.058022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.014 [2024-11-19 23:59:47.071785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.014 [2024-11-19 23:59:47.072020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.014 [2024-11-19 23:59:47.072075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.014 [2024-11-19 23:59:47.086173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.014 [2024-11-19 23:59:47.086438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.014 [2024-11-19 23:59:47.086484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.014 [2024-11-19 23:59:47.100474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.014 [2024-11-19 23:59:47.100735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.014 [2024-11-19 23:59:47.100783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.014 [2024-11-19 23:59:47.114914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.014 [2024-11-19 23:59:47.115227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.014 [2024-11-19 23:59:47.115257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.014 [2024-11-19 23:59:47.129216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.014 [2024-11-19 23:59:47.129482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.014 [2024-11-19 23:59:47.129518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.014 [2024-11-19 23:59:47.143545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.014 [2024-11-19 23:59:47.143805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.014 [2024-11-19 23:59:47.143853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.014 [2024-11-19 23:59:47.157922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.014 [2024-11-19 23:59:47.158186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.014 [2024-11-19 23:59:47.158218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.014 [2024-11-19 23:59:47.172196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.014 [2024-11-19 23:59:47.172519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.014 [2024-11-19 23:59:47.172567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.014 [2024-11-19 23:59:47.186718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.015 [2024-11-19 23:59:47.187043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.015 [2024-11-19 23:59:47.187096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.015 [2024-11-19 23:59:47.201102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.015 [2024-11-19 23:59:47.201400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.015 [2024-11-19 23:59:47.201449] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.015 [2024-11-19 23:59:47.215383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.015 [2024-11-19 23:59:47.215643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.015 [2024-11-19 23:59:47.215688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.015 [2024-11-19 23:59:47.230004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.015 [2024-11-19 23:59:47.230336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.015 [2024-11-19 23:59:47.230379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.015 [2024-11-19 23:59:47.244367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.015 [2024-11-19 23:59:47.244630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.015 [2024-11-19 23:59:47.244682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.015 [2024-11-19 23:59:47.258689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.015 [2024-11-19 23:59:47.258992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.015 [2024-11-19 23:59:47.259024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.015 [2024-11-19 23:59:47.273196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.015 [2024-11-19 23:59:47.273466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.015 [2024-11-19 23:59:47.273499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.015 [2024-11-19 23:59:47.287503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.015 [2024-11-19 23:59:47.287781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.015 [2024-11-19 23:59:47.287813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.015 [2024-11-19 23:59:47.301845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.015 [2024-11-19 23:59:47.302130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.015 [2024-11-19 23:59:47.302162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.015 [2024-11-19 23:59:47.316334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.015 [2024-11-19 23:59:47.316627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.015 [2024-11-19 23:59:47.316659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.330580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.330823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.330863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.344908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.345252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.345283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.359373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.359671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.359706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.373871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.374184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.374214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.388397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.388670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.388703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.402908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.403206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 
23:59:47.403239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 19140.00 IOPS, 74.77 MiB/s [2024-11-19T22:59:47.586Z] [2024-11-19 23:59:47.417282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.417578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.417610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.431655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.431945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.431976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.446001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.446297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.446328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.460600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.460906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.460940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.475085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.475378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.475411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.489370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.489641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.489675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.503735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.504007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:13910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.504039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.518190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.518497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.518544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.532592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.532897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.532931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.547095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.547376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.547417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.561536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.561824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.561859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.274 [2024-11-19 23:59:47.576004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.274 [2024-11-19 23:59:47.576297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.274 [2024-11-19 23:59:47.576348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.533 [2024-11-19 23:59:47.590317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.590573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.590613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.604590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.604868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.604903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.619034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.619305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.619335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.633374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.633636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.633673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.647724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.648004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.648038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.662081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.662330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.662377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.676609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.676893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.676927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.690870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.691176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.691205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.705190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.705521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.705570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.719475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.719755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.719789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.733847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.734118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.734165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.748139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.748417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.748450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.762555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.762833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.762867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.777014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.777291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.777321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.791526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.791823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.791856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.806007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 
23:59:47.806320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.806366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.820575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.820853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.820886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.534 [2024-11-19 23:59:47.834982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.534 [2024-11-19 23:59:47.835270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.534 [2024-11-19 23:59:47.835313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.793 [2024-11-19 23:59:47.849316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.793 [2024-11-19 23:59:47.849593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.793 [2024-11-19 23:59:47.849628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.793 [2024-11-19 23:59:47.863684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.793 [2024-11-19 23:59:47.863975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.793 [2024-11-19 23:59:47.864008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.793 [2024-11-19 23:59:47.878086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.793 [2024-11-19 23:59:47.878463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.793 [2024-11-19 23:59:47.878510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.793 [2024-11-19 23:59:47.892535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.793 [2024-11-19 23:59:47.892880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.793 [2024-11-19 23:59:47.892909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.793 [2024-11-19 23:59:47.906887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 
00:35:13.793 [2024-11-19 23:59:47.907157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.793 [2024-11-19 23:59:47.907185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.793 [2024-11-19 23:59:47.921261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.793 [2024-11-19 23:59:47.921526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.793 [2024-11-19 23:59:47.921555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.793 [2024-11-19 23:59:47.935587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.793 [2024-11-19 23:59:47.935868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.793 [2024-11-19 23:59:47.935897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.793 [2024-11-19 23:59:47.949794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.793 [2024-11-19 23:59:47.950082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.793 [2024-11-19 23:59:47.950133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.793 [2024-11-19 23:59:47.964458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.793 [2024-11-19 23:59:47.964722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.793 [2024-11-19 23:59:47.964769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.793 [2024-11-19 23:59:47.978699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.793 [2024-11-19 23:59:47.978986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.793 [2024-11-19 23:59:47.979029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.793 [2024-11-19 23:59:47.993155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.794 [2024-11-19 23:59:47.993453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.794 [2024-11-19 23:59:47.993480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.794 [2024-11-19 23:59:48.007466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) 
with pdu=0x2000166eea00 00:35:13.794 [2024-11-19 23:59:48.007789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.794 [2024-11-19 23:59:48.007831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.794 [2024-11-19 23:59:48.021894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.794 [2024-11-19 23:59:48.022200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.794 [2024-11-19 23:59:48.022229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.794 [2024-11-19 23:59:48.036205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.794 [2024-11-19 23:59:48.036493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.794 [2024-11-19 23:59:48.036535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.794 [2024-11-19 23:59:48.050545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.794 [2024-11-19 23:59:48.050823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.794 [2024-11-19 23:59:48.050850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.794 [2024-11-19 23:59:48.065017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.794 [2024-11-19 23:59:48.065308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.794 [2024-11-19 23:59:48.065351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.794 [2024-11-19 23:59:48.079340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.794 [2024-11-19 23:59:48.079673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.794 [2024-11-19 23:59:48.079717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.794 [2024-11-19 23:59:48.093929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:13.794 [2024-11-19 23:59:48.094209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.794 [2024-11-19 23:59:48.094237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.108248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.108526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.108569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.122582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.122860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.122888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.136910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.137169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.137212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.151375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.151674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.151702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.165838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.166126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.166157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.180225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.180487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.180514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.194561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.194821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.194849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.208816] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.209082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.209110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.223212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.223482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.223524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.237616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.237878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.237906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.251905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.252174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.252218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.266228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.266510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.266553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.280592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.280900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.053 [2024-11-19 23:59:48.280942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.053 [2024-11-19 23:59:48.294888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.053 [2024-11-19 23:59:48.295155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.054 [2024-11-19 23:59:48.295183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.054 [2024-11-19 23:59:48.309274] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.054 [2024-11-19 23:59:48.309574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.054 [2024-11-19 23:59:48.309618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.054 [2024-11-19 23:59:48.323637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.054 [2024-11-19 23:59:48.323917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.054 [2024-11-19 23:59:48.323965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.054 [2024-11-19 23:59:48.337982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.054 [2024-11-19 23:59:48.338277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.054 [2024-11-19 23:59:48.338307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.054 [2024-11-19 23:59:48.352250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.054 [2024-11-19 23:59:48.352540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.054 [2024-11-19 23:59:48.352573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.313 [2024-11-19 23:59:48.366904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.313 [2024-11-19 23:59:48.367153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.313 [2024-11-19 23:59:48.367188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.313 [2024-11-19 23:59:48.381598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.313 [2024-11-19 23:59:48.381865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.313 [2024-11-19 23:59:48.381895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.313 [2024-11-19 23:59:48.396148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.313 [2024-11-19 23:59:48.396428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.313 [2024-11-19 23:59:48.396460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.313 18447.50 
IOPS, 72.06 MiB/s [2024-11-19T22:59:48.625Z] [2024-11-19 23:59:48.410781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e460) with pdu=0x2000166eea00 00:35:14.313 [2024-11-19 23:59:48.411054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.313 [2024-11-19 23:59:48.411107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:14.313 00:35:14.313 Latency(us) 00:35:14.313 [2024-11-19T22:59:48.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.313 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:14.313 nvme0n1 : 2.01 18446.56 72.06 0.00 0.00 6922.16 2754.94 17087.91 00:35:14.313 [2024-11-19T22:59:48.625Z] =================================================================================================================== 00:35:14.313 [2024-11-19T22:59:48.625Z] Total : 18446.56 72.06 0.00 0.00 6922.16 2754.94 17087.91 00:35:14.313 { 00:35:14.313 "results": [ 00:35:14.313 { 00:35:14.313 "job": "nvme0n1", 00:35:14.313 "core_mask": "0x2", 00:35:14.313 "workload": "randwrite", 00:35:14.313 "status": "finished", 00:35:14.313 "queue_depth": 128, 00:35:14.313 "io_size": 4096, 00:35:14.313 "runtime": 2.008776, 00:35:14.313 "iops": 18446.55651003397, 00:35:14.313 "mibps": 72.0568613673202, 00:35:14.313 "io_failed": 0, 00:35:14.313 "io_timeout": 0, 00:35:14.313 "avg_latency_us": 6922.163646631385, 00:35:14.313 "min_latency_us": 2754.9392592592594, 00:35:14.313 "max_latency_us": 17087.905185185184 00:35:14.313 } 00:35:14.313 ], 00:35:14.313 "core_count": 1 00:35:14.313 } 00:35:14.313 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:14.313 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:14.313 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:14.313 | .driver_specific 00:35:14.313 | .nvme_error 00:35:14.313 | .status_code 00:35:14.313 | .command_transient_transport_error' 00:35:14.313 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:14.572 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:35:14.572 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 334785 00:35:14.572 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 334785 ']' 00:35:14.572 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 334785 00:35:14.572 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:14.572 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.572 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 334785 00:35:14.572 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:14.572 23:59:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:14.572 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 334785' 00:35:14.572 killing process with pid 334785 00:35:14.572 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 334785 00:35:14.572 Received shutdown signal, test time was about 2.000000 seconds 00:35:14.572 00:35:14.572 Latency(us) 00:35:14.572 [2024-11-19T22:59:48.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.572 [2024-11-19T22:59:48.884Z] =================================================================================================================== 00:35:14.572 [2024-11-19T22:59:48.884Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.572 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 334785 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=335204 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 335204 /var/tmp/bperf.sock 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 335204 ']' 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:14.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.831 23:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.831 [2024-11-19 23:59:49.004061] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:35:14.831 [2024-11-19 23:59:49.004171] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335204 ] 00:35:14.831 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:14.831 Zero copy mechanism will not be used. 
00:35:14.831 [2024-11-19 23:59:49.081565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.831 [2024-11-19 23:59:49.131877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.088 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.088 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:15.088 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:15.088 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:15.345 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:15.345 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.345 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.345 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.345 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.345 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.603 nvme0n1 00:35:15.861 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:15.862 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.862 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.862 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.862 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:15.862 23:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:15.862 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:15.862 Zero copy mechanism will not be used. 00:35:15.862 Running I/O for 2 seconds... 
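[editor's note] The trace above (host/digest.sh lines 18-69 and 27-28) walks through the digest-error test: start bdevperf against /var/tmp/bperf.sock, enable per-bdev NVMe error statistics, attach the controller with --ddgst, have the target's accel layer corrupt crc32c results, run the workload, and finally read the transient-transport-error count back out of bdev_get_iostat. The following is a condensed, hedged sketch of that flow, not the verbatim test script. It assumes $rootdir points at the SPDK checkout used in this job, that the nvmf target is already listening on 10.0.0.2:4420, and that rpc_cmd, waitforlisten and killprocess come from test/common/autotest_common.sh as they do in the real test.

#!/usr/bin/env bash
# Sketch of the nvmf_digest_error flow traced in this log (assumptions noted above).
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path, matches the trace
bperf_sock=/var/tmp/bperf.sock

# Provides rpc_cmd (target RPC), waitforlisten and killprocess helpers.
source "$rootdir/test/common/autotest_common.sh"

bperf_rpc() {
        # Drive the bdevperf app through its private RPC socket.
        "$rootdir/scripts/rpc.py" -s "$bperf_sock" "$@"
}

get_transient_errcount() {
        # Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded by the
        # bdev layer (needs bdev_nvme_set_options --nvme-error-stat below).
        bperf_rpc bdev_get_iostat -b "$1" \
                | jq -r '.bdevs[0]
                        | .driver_specific
                        | .nvme_error
                        | .status_code
                        | .command_transient_transport_error'
}

# Start bdevperf in wait-for-RPC mode (-z) with the randwrite/128K/qd16 job.
"$rootdir/build/examples/bdevperf" -m 2 -r "$bperf_sock" \
        -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
waitforlisten "$bperfpid" "$bperf_sock"

# Track NVMe error completions per bdev and retry failed I/O indefinitely.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the controller with data digest enabled, then have the target's accel
# crc32c operation return corrupted digests at the configured interval (-i 32).
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload, then expect at least one transient transport error,
# mirroring the "(( 145 > 0 ))" check seen earlier in this log.
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests
(( $(get_transient_errcount nvme0n1) > 0 ))

killprocess "$bperfpid"
[end editor's note]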
00:35:15.862 [2024-11-19 23:59:50.069815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.069947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.069999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.076201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.076348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.076398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.081876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.082066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.082108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.088417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.088598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.088636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.094442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.094606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.094644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.100554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.100673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.100706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.106974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.107107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.107156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.113480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.113575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.113607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.119867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.119974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.120011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.126207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.126307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.126337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.132273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.132388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.132431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.138151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.138238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.138268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.144618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.144726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.144757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.151239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.151322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.151353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.156763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.156889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.156921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.162453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.162577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.162610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:15.862 [2024-11-19 23:59:50.168106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:15.862 [2024-11-19 23:59:50.168224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.862 [2024-11-19 23:59:50.168255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.173640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.173755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.173789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.179212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.179335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.179375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.185301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.185388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.185440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.191791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.191898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.191929] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.198224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.198362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.198413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.203742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.203839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.203880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.209454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.209558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.209594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.215083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.215209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.215238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.221467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.221563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.221604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.228058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.228200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.228230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.234456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.234597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.234630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.240487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.240580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.240614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.246603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.246701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.246732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.252584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.252696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.252728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.258191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.258308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.258337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.263740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.263839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.263869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.269322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.269446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.269494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.274778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.274924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 
23:59:50.274955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.280286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.280410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.280442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.285759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.122 [2024-11-19 23:59:50.285849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.122 [2024-11-19 23:59:50.285881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.122 [2024-11-19 23:59:50.291255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.291342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.291373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.297745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.297849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.297882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.303875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.303988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.304024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.310026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.310171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.310200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.316854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.316961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:16.123 [2024-11-19 23:59:50.316993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.323247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.323336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.323364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.329760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.329874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.329926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.335289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.335425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.335464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.340743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.340834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.340864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.346239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.346376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.346409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.351848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.351952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.351983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.357317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.357441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.357473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.363579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.363668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.363704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.369769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.369873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.369904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.375615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.375752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.375785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.381181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.381278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.381313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.386724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.386834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.386865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.392263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.392373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.392420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.397829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.397932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.397968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.403620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.403719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.403752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.409920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.410040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.410080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.416287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.416382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.416412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.422628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.422731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.422762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.123 [2024-11-19 23:59:50.429166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.123 [2024-11-19 23:59:50.429261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.123 [2024-11-19 23:59:50.429296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.383 [2024-11-19 23:59:50.435545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.383 [2024-11-19 23:59:50.435636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.383 [2024-11-19 23:59:50.435672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.383 [2024-11-19 23:59:50.441912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.383 [2024-11-19 23:59:50.442019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.383 [2024-11-19 23:59:50.442054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.383 [2024-11-19 23:59:50.448062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.383 [2024-11-19 23:59:50.448179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.383 [2024-11-19 23:59:50.448217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.383 [2024-11-19 23:59:50.454302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.383 [2024-11-19 23:59:50.454389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.383 [2024-11-19 23:59:50.454436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.383 [2024-11-19 23:59:50.460654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.383 [2024-11-19 23:59:50.460757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.383 [2024-11-19 23:59:50.460788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.383 [2024-11-19 23:59:50.467125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.383 [2024-11-19 23:59:50.467213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.383 [2024-11-19 23:59:50.467247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.383 [2024-11-19 23:59:50.473486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.383 [2024-11-19 23:59:50.473593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.383 [2024-11-19 23:59:50.473626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.383 [2024-11-19 23:59:50.479887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.383 [2024-11-19 23:59:50.479973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.383 [2024-11-19 23:59:50.480005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.383 [2024-11-19 23:59:50.485999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.383 [2024-11-19 23:59:50.486105] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.383 [2024-11-19 23:59:50.486153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.383 [2024-11-19 23:59:50.491423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.383 [2024-11-19 23:59:50.491555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.383 [2024-11-19 23:59:50.491593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.383 [2024-11-19 23:59:50.497781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.383 [2024-11-19 23:59:50.497936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.383 [2024-11-19 23:59:50.497968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.383 [2024-11-19 23:59:50.504459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.504579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.504612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.510171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.510313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.510342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.515757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.515919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.515954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.521693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.521827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.521859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.528768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 
23:59:50.528931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.528969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.535577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.535801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.535833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.542250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.542424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.542462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.548905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.549051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.549104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.556534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.556741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.556774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.563868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.564002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.564035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.569888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.569987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.570017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.575348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with 
pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.575485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.575516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.581004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.581169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.581209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.586581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.586725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.586757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.592154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.592273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.592302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.598262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.598446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.598486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.604813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.605022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.605054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.611218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.611406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.611438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.617161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.617273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.617302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.623925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.624113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.624159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.630516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.630612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.384 [2024-11-19 23:59:50.630650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.384 [2024-11-19 23:59:50.638174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.384 [2024-11-19 23:59:50.638344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.385 [2024-11-19 23:59:50.638373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.385 [2024-11-19 23:59:50.645290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.385 [2024-11-19 23:59:50.645439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.385 [2024-11-19 23:59:50.645476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.385 [2024-11-19 23:59:50.653342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.385 [2024-11-19 23:59:50.653484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.385 [2024-11-19 23:59:50.653516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.385 [2024-11-19 23:59:50.660519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.385 [2024-11-19 23:59:50.660864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.385 [2024-11-19 23:59:50.660899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.385 [2024-11-19 23:59:50.667952] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.385 [2024-11-19 23:59:50.668426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.385 [2024-11-19 23:59:50.668460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.385 [2024-11-19 23:59:50.675267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.385 [2024-11-19 23:59:50.675616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.385 [2024-11-19 23:59:50.675649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.385 [2024-11-19 23:59:50.682812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.385 [2024-11-19 23:59:50.683130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.385 [2024-11-19 23:59:50.683160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.385 [2024-11-19 23:59:50.690038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.385 [2024-11-19 23:59:50.690408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.385 [2024-11-19 23:59:50.690454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.645 [2024-11-19 23:59:50.697283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.645 [2024-11-19 23:59:50.697629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.645 [2024-11-19 23:59:50.697667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.645 [2024-11-19 23:59:50.704611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.645 [2024-11-19 23:59:50.705004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.645 [2024-11-19 23:59:50.705038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.645 [2024-11-19 23:59:50.711915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.645 [2024-11-19 23:59:50.712248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.645 [2024-11-19 23:59:50.712279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.645 [2024-11-19 23:59:50.719299] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.645 [2024-11-19 23:59:50.719590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.645 [2024-11-19 23:59:50.719623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.645 [2024-11-19 23:59:50.725717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.645 [2024-11-19 23:59:50.726001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.645 [2024-11-19 23:59:50.726034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.645 [2024-11-19 23:59:50.731100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.645 [2024-11-19 23:59:50.731418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.645 [2024-11-19 23:59:50.731447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.645 [2024-11-19 23:59:50.736406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.645 [2024-11-19 23:59:50.736688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.645 [2024-11-19 23:59:50.736720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.645 [2024-11-19 23:59:50.741781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.645 [2024-11-19 23:59:50.742019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.645 [2024-11-19 23:59:50.742051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.645 [2024-11-19 23:59:50.746550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.645 [2024-11-19 23:59:50.746758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.645 [2024-11-19 23:59:50.746790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.645 [2024-11-19 23:59:50.751471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.751753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.751785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.646 
[2024-11-19 23:59:50.756841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.757128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.757158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.762860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.763076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.763110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.769065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.769302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.769330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.775103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.775384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.775431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.781718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.782057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.782119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.788022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.788282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.788311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.794539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.794830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.794861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.800523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.800775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.800808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.805964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.806269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.806298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.811595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.811838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.811876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.817203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.817438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.817470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.822885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.823157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.823195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.828369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.828592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.828634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.833552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.833785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.833817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.839158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.839389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.839422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.844724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.844954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.844996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.850546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.850798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.850830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.855983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.856278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.856308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.861803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.862116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.862167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.867460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.867700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.867732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.873057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.646 [2024-11-19 23:59:50.873292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.646 [2024-11-19 23:59:50.873337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.646 [2024-11-19 23:59:50.878714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.878997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.879029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.884350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.884740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.884771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.889988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.890318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.890351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.895405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.895694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.895726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.900753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.900943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.900981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.906478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.906715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.906747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.912232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.912547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.912595] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.917672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.917890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.917918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.923329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.923610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.923642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.928849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.929045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.929092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.934329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.934691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.934724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.939870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.940087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.940119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.945231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.945593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.945625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.647 [2024-11-19 23:59:50.950828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.647 [2024-11-19 23:59:50.951184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.647 [2024-11-19 23:59:50.951215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:50.956413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:50.956635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:50.956669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:50.961998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:50.962225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:50.962256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:50.967494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:50.967705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:50.967746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:50.972938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:50.973203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:50.973232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:50.978691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:50.978999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:50.979031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:50.984497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:50.984720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:50.984753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:50.990214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:50.990567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 
23:59:50.990601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:50.995911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:50.996189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:50.996221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:51.001523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:51.001727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:51.001759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:51.007129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:51.007336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:51.007367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:51.012829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:51.013026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:51.013058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:51.018375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:51.018609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:51.018647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:51.023991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:51.024213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:51.024242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:51.029606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:51.029939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:16.907 [2024-11-19 23:59:51.029978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:51.035083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:51.035291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:51.035321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:51.040781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:51.040982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:51.041014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:51.046364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.907 [2024-11-19 23:59:51.046688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.907 [2024-11-19 23:59:51.046721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.907 [2024-11-19 23:59:51.051864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.052203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.052232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.057518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.057751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.057780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.063189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.063467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.063499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.908 5141.00 IOPS, 642.62 MiB/s [2024-11-19T22:59:51.220Z] [2024-11-19 23:59:51.070241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.070414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.070452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.075673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.075950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.075983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.080602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.080748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.080787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.085416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.085643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.085682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.090938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.091157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.091188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.096066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.096252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.096281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.100761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.100936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.100968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.105481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.105658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.105691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.110182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.110380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.110425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.114929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.115124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.115153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.119660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.119831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.119864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.124336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.124518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.124551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.129059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.129246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.129283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.133839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.134009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.134041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.138552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 
23:59:51.138765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.138797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.143311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.143494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.143526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.148079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.148278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.148307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.152783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.152957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.152989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.157530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.157681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.157713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.162217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.162350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.162396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.166984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.167187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.167222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.171707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with 
pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.171886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.171918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.176412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.176580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.908 [2024-11-19 23:59:51.176612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.908 [2024-11-19 23:59:51.181124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.908 [2024-11-19 23:59:51.181305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.909 [2024-11-19 23:59:51.181334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.909 [2024-11-19 23:59:51.185817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.909 [2024-11-19 23:59:51.185990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.909 [2024-11-19 23:59:51.186022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.909 [2024-11-19 23:59:51.190489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.909 [2024-11-19 23:59:51.190653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.909 [2024-11-19 23:59:51.190691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.909 [2024-11-19 23:59:51.195162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.909 [2024-11-19 23:59:51.195315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.909 [2024-11-19 23:59:51.195349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.909 [2024-11-19 23:59:51.199825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.909 [2024-11-19 23:59:51.200000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.909 [2024-11-19 23:59:51.200032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.909 [2024-11-19 23:59:51.204505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.909 [2024-11-19 23:59:51.204700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.909 [2024-11-19 23:59:51.204732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.909 [2024-11-19 23:59:51.209151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.909 [2024-11-19 23:59:51.209310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.909 [2024-11-19 23:59:51.209339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.909 [2024-11-19 23:59:51.213816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:16.909 [2024-11-19 23:59:51.213993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.909 [2024-11-19 23:59:51.214034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.168 [2024-11-19 23:59:51.218503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.168 [2024-11-19 23:59:51.218679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.168 [2024-11-19 23:59:51.218713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.168 [2024-11-19 23:59:51.223188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.168 [2024-11-19 23:59:51.223343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.223374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.227871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.228043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.228085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.232597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.232781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.232821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.237285] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.237465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.237498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.241970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.242165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.242194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.246689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.246850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.246882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.251536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.251765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.251796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.256582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.256770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.256803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.261455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.261620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.261659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.266234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.266430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.266463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.169 
[2024-11-19 23:59:51.271286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.271468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.271500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.276403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.276586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.276616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.281801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.281958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.281987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.287067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.287253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.287282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.292440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.292602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.292635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.297653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.297828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.297861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.302836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.303003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.303035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.308206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.308347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.308377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.313423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.313601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.313633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.318801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.318985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.319018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.323845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.324012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.324041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.329235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.329386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.329421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.334569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.334736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.334783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.339735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.339920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.339953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.344952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.345129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.345180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.350086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.350237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.350266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.355455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.169 [2024-11-19 23:59:51.355614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.169 [2024-11-19 23:59:51.355643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.169 [2024-11-19 23:59:51.360669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.360837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.360870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.366029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.366220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.366261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.371223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.371393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.371426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.376987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.377236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.377266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.383471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.383723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.383752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.388887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.389194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.389226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.394541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.394699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.394732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.399894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.400145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.400180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.405347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.405579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.405619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.410838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.411065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.411133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.416440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.416734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.416768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.422167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.422349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.422384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.427676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.427928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.427961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.433303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.433554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.433586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.438891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.439114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.439171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.444460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.444717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.444750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.450012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.450299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.450329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.455529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.455745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 
23:59:51.455778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.460948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.461238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.461268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.466600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.466896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.466929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.472087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.472409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.472448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.170 [2024-11-19 23:59:51.477450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.170 [2024-11-19 23:59:51.477662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.170 [2024-11-19 23:59:51.477706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.430 [2024-11-19 23:59:51.482844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.430 [2024-11-19 23:59:51.483138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.430 [2024-11-19 23:59:51.483176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.430 [2024-11-19 23:59:51.488478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.430 [2024-11-19 23:59:51.488693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.430 [2024-11-19 23:59:51.488727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.430 [2024-11-19 23:59:51.494229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.430 [2024-11-19 23:59:51.494435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:17.430 [2024-11-19 23:59:51.494469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.430 [2024-11-19 23:59:51.499681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.430 [2024-11-19 23:59:51.499895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.430 [2024-11-19 23:59:51.499929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.430 [2024-11-19 23:59:51.505152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.430 [2024-11-19 23:59:51.505354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.430 [2024-11-19 23:59:51.505399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.430 [2024-11-19 23:59:51.510683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.430 [2024-11-19 23:59:51.510912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.430 [2024-11-19 23:59:51.510952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.430 [2024-11-19 23:59:51.516206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.430 [2024-11-19 23:59:51.516416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.430 [2024-11-19 23:59:51.516449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.430 [2024-11-19 23:59:51.521632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.430 [2024-11-19 23:59:51.521882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.430 [2024-11-19 23:59:51.521911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.430 [2024-11-19 23:59:51.527055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.430 [2024-11-19 23:59:51.527310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.430 [2024-11-19 23:59:51.527339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.430 [2024-11-19 23:59:51.532673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.430 [2024-11-19 23:59:51.532883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:17.430 [2024-11-19 23:59:51.532915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.430 [2024-11-19 23:59:51.538267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.430 [2024-11-19 23:59:51.538470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.430 [2024-11-19 23:59:51.538508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.430 [2024-11-19 23:59:51.543740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.430 [2024-11-19 23:59:51.543932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.430 [2024-11-19 23:59:51.543964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.431 [2024-11-19 23:59:51.549193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.431 [2024-11-19 23:59:51.549391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.431 [2024-11-19 23:59:51.549423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.431 [2024-11-19 23:59:51.554798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.431 [2024-11-19 23:59:51.555001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.431 [2024-11-19 23:59:51.555033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.431 [2024-11-19 23:59:51.560386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.431 [2024-11-19 23:59:51.560608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.431 [2024-11-19 23:59:51.560640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.431 [2024-11-19 23:59:51.565854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.431 [2024-11-19 23:59:51.566075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.431 [2024-11-19 23:59:51.566121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.431 [2024-11-19 23:59:51.571313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.431 [2024-11-19 23:59:51.571497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.431 [2024-11-19 23:59:51.571530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.431 [2024-11-19 23:59:51.576980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.431 [2024-11-19 23:59:51.577221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.431 [2024-11-19 23:59:51.577251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.431 [2024-11-19 23:59:51.582501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.431 [2024-11-19 23:59:51.582732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.431 [2024-11-19 23:59:51.582763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.431 [2024-11-19 23:59:51.588297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.431 [2024-11-19 23:59:51.588516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.431 [2024-11-19 23:59:51.588548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.431 [2024-11-19 23:59:51.593683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.431 [2024-11-19 23:59:51.593886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.431 [2024-11-19 23:59:51.593918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.431 [2024-11-19 23:59:51.599188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.431 [2024-11-19 23:59:51.599501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.431 [2024-11-19 23:59:51.599533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.431 [2024-11-19 23:59:51.604786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.431 [2024-11-19 23:59:51.605058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.431 [2024-11-19 23:59:51.605099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.431 [2024-11-19 23:59:51.610296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8 00:35:17.431 [2024-11-19 23:59:51.610493] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.431 [2024-11-19 23:59:51.610525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:17.431 [2024-11-19 23:59:51.616146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8
00:35:17.431 [2024-11-19 23:59:51.616359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.431 [2024-11-19 23:59:51.616388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR *NOTICE* sequence repeats roughly every 5-6 ms for the remaining injected WRITEs, first on qid:1 cid:0 and then on cid:1, at varying LBAs, up to 23:59:52.066 ...]
00:35:17.953 5483.00 IOPS, 685.38 MiB/s [2024-11-19T22:59:52.265Z]
[2024-11-19 23:59:52.072858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d5e7a0) with pdu=0x2000166ff3c8
00:35:17.953 [2024-11-19 23:59:52.073019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.953 [2024-11-19 23:59:52.073050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:17.953
00:35:17.953 Latency(us)
00:35:17.953 [2024-11-19T22:59:52.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:17.953 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:35:17.953 nvme0n1 : 2.00 5480.08 685.01 0.00 0.00 2911.43 2160.26 8058.50
00:35:17.953 [2024-11-19T22:59:52.265Z] ===================================================================================================================
00:35:17.953 [2024-11-19T22:59:52.265Z] Total : 5480.08 685.01 0.00 0.00 2911.43 2160.26 8058.50
00:35:17.953 {
00:35:17.953   "results": [
00:35:17.953     {
00:35:17.953       "job": "nvme0n1",
00:35:17.953       "core_mask": "0x2",
00:35:17.953       "workload": "randwrite",
00:35:17.953       "status": "finished",
00:35:17.953       "queue_depth": 16,
00:35:17.953       "io_size": 131072,
00:35:17.953       "runtime": 2.003987,
00:35:17.953       "iops": 5480.075469551449,
00:35:17.953       "mibps": 685.0094336939311,
00:35:17.953       "io_failed": 0,
00:35:17.953       "io_timeout": 0,
00:35:17.953       "avg_latency_us": 2911.425870481664,
00:35:17.953       "min_latency_us": 2160.260740740741,
00:35:17.953       "max_latency_us": 8058.500740740741
00:35:17.953     }
00:35:17.953   ],
00:35:17.953   "core_count": 1
00:35:17.953 }
23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:17.953 | .driver_specific
00:35:17.953 | .nvme_error
00:35:17.953 | .status_code
00:35:17.953 | .command_transient_transport_error'
00:35:18.211 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 355 > 0 ))
23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 335204
23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 335204 ']'
23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 335204
23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335204 00:35:18.211
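An aside on the readback step traced just above: get_transient_errcount amounts to one RPC plus a jq projection over the iostat JSON. A minimal standalone sketch, assuming the JSON layout implied by the jq filter in the trace (the rpc.py path and the /var/tmp/bperf.sock socket are copied from the trace; the helper name count_transient_errors is illustrative, not from digest.sh):

  #!/usr/bin/env bash
  # Query bdev iostat over the bperf RPC socket and pull out the transient transport error count.
  count_transient_errors() {
      local bdev=$1
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }
  # The digest_error test passes only if at least one injected digest error was
  # reported back as a transient transport error, hence the (( count > 0 )) check.
  count=$(count_transient_errors nvme0n1)
  (( count > 0 )) && echo "transient transport errors observed: $count"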
23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:18.211 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:18.211 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335204' 00:35:18.211 killing process with pid 335204 00:35:18.211 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 335204 00:35:18.211 Received shutdown signal, test time was about 2.000000 seconds 00:35:18.211 00:35:18.211 Latency(us) 00:35:18.211 [2024-11-19T22:59:52.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.211 [2024-11-19T22:59:52.523Z] =================================================================================================================== 00:35:18.211 [2024-11-19T22:59:52.523Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.211 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 335204 00:35:18.470 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 333932 00:35:18.470 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 333932 ']' 00:35:18.470 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 333932 00:35:18.470 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:18.470 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.470 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333932 00:35:18.470 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:18.470 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:18.470 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333932' 00:35:18.470 killing process with pid 333932 00:35:18.470 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 333932 00:35:18.470 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 333932 00:35:18.729 00:35:18.729 real 0m15.284s 00:35:18.729 user 0m30.385s 00:35:18.729 sys 0m4.500s 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.729 ************************************ 00:35:18.729 END TEST nvmf_digest_error 00:35:18.729 ************************************ 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp 
']' 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:18.729 rmmod nvme_tcp 00:35:18.729 rmmod nvme_fabrics 00:35:18.729 rmmod nvme_keyring 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 333932 ']' 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 333932 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 333932 ']' 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 333932 00:35:18.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (333932) - No such process 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 333932 is not found' 00:35:18.729 Process with pid 333932 is not found 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:18.729 23:59:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.265 23:59:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:21.265 00:35:21.265 real 0m35.467s 00:35:21.265 user 1m2.805s 00:35:21.265 sys 0m10.358s 00:35:21.265 23:59:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.265 23:59:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:21.265 ************************************ 00:35:21.265 END TEST nvmf_digest 00:35:21.265 ************************************ 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf 
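Before the bdevperf run begins, it is worth spelling out what the nvmftestfini sequence above actually did. A condensed sketch of that teardown, with the interface name cvl_0_1 and the SPDK_NVMF rule filter taken from the trace (the explicit ip netns delete is an assumption about what _remove_spdk_ns amounts to here, not a verified copy of it):

  # Unload the kernel NVMe-oF/TCP initiator modules; ignore ones that are not loaded.
  modprobe -v -r nvme-tcp || true
  modprobe -v -r nvme-fabrics || true
  # Restore iptables minus any SPDK_NVMF rules the test inserted.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Flush the test address from the second port and drop the target-side namespace.
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true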
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.265 ************************************ 00:35:21.265 START TEST nvmf_bdevperf 00:35:21.265 ************************************ 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:21.265 * Looking for test storage... 00:35:21.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.265 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:21.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.266 --rc genhtml_branch_coverage=1 00:35:21.266 --rc genhtml_function_coverage=1 00:35:21.266 --rc genhtml_legend=1 00:35:21.266 --rc geninfo_all_blocks=1 00:35:21.266 --rc geninfo_unexecuted_blocks=1 00:35:21.266 00:35:21.266 ' 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:21.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.266 --rc genhtml_branch_coverage=1 00:35:21.266 --rc genhtml_function_coverage=1 00:35:21.266 --rc genhtml_legend=1 00:35:21.266 --rc geninfo_all_blocks=1 00:35:21.266 --rc geninfo_unexecuted_blocks=1 00:35:21.266 00:35:21.266 ' 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:21.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.266 --rc genhtml_branch_coverage=1 00:35:21.266 --rc genhtml_function_coverage=1 00:35:21.266 --rc genhtml_legend=1 00:35:21.266 --rc geninfo_all_blocks=1 00:35:21.266 --rc geninfo_unexecuted_blocks=1 00:35:21.266 00:35:21.266 ' 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:21.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.266 --rc genhtml_branch_coverage=1 00:35:21.266 --rc genhtml_function_coverage=1 00:35:21.266 --rc genhtml_legend=1 00:35:21.266 --rc geninfo_all_blocks=1 00:35:21.266 --rc geninfo_unexecuted_blocks=1 00:35:21.266 00:35:21.266 ' 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
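The lt 1.15 2 / cmp_versions trace above is a field-by-field numeric compare after splitting both version strings on IFS=.-:. A simplified re-creation of that logic (the real cmp_versions in scripts/common.sh takes an operator argument and covers more cases than this sketch):

  # Return success if version $1 is strictly older than version $2.
  version_lt() {
      local IFS=.-:                       # split on dots, dashes and colons, as the trace shows
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1                            # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 sorts before 2.x"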
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:21.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:21.266 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:21.267 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
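The "line 33: [: : integer expression expected" message captured above is the standard test/[ complaint when -eq is handed an empty string instead of a number. A two-line reproduction and the usual guard (the variable name pause_on_failure is purely illustrative, not taken from nvmf/common.sh):

  pause_on_failure=""
  [ "$pause_on_failure" -eq 1 ] && echo enabled        # errors: [: : integer expression expected
  [ "${pause_on_failure:-0}" -eq 1 ] && echo enabled   # guarded: empty defaults to 0, no warning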
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:21.267 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.267 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:21.267 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:21.267 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:21.267 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.267 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.267 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.267 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:21.267 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:21.267 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:21.267 23:59:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:23.171 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:23.171 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:23.171 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:23.171 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:23.171 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:23.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:23.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:35:23.172 00:35:23.172 --- 10.0.0.2 ping statistics --- 00:35:23.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.172 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:23.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:23.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:35:23.172 00:35:23.172 --- 10.0.0.1 ping statistics --- 00:35:23.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.172 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:23.172 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:23.430 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:23.430 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:23.430 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:23.430 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:23.430 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.430 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=337661 00:35:23.430 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:23.430 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 337661 00:35:23.430 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 337661 ']' 00:35:23.430 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.431 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:23.431 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:23.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:23.431 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:23.431 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.431 [2024-11-19 23:59:57.548789] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:35:23.431 [2024-11-19 23:59:57.548875] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:23.431 [2024-11-19 23:59:57.628284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:23.431 [2024-11-19 23:59:57.679186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:23.431 [2024-11-19 23:59:57.679254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:23.431 [2024-11-19 23:59:57.679270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:23.431 [2024-11-19 23:59:57.679284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:23.431 [2024-11-19 23:59:57.679295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:23.431 [2024-11-19 23:59:57.680921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:23.431 [2024-11-19 23:59:57.680984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.431 [2024-11-19 23:59:57.680980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.689 [2024-11-19 23:59:57.824991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.689 Malloc0 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.689 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
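In plain terms, the nvmftestinit and tgt_init steps traced above come down to the short sequence below, reconstructed from this run's xtrace output. The cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and the 0xE core mask are simply what this job happened to use; paths are shown relative to the SPDK tree rather than the full Jenkins workspace, and the last two RPC calls of the sequence (nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) appear immediately below in the trace.

  # Move one port of the E810 pair into a private namespace for the target;
  # the initiator side (cvl_0_1) stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port and sanity-check the path in both directions
  # (the trace additionally tags the rule with an SPDK_NVMF comment).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # Start the target inside the namespace; the test waits for it to listen on
  # /var/tmp/spdk.sock (waitforlisten) before issuing RPCs from the root namespace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Keeping the target's port (cvl_0_0, 10.0.0.2) in its own network namespace while bdevperf stays in the root namespace on cvl_0_1 (10.0.0.1) forces host and target to exchange NVMe/TCP traffic over a real NIC-to-NIC path even though both run on the same machine.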
00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.690 [2024-11-19 23:59:57.892242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:23.690 { 00:35:23.690 "params": { 00:35:23.690 "name": "Nvme$subsystem", 00:35:23.690 "trtype": "$TEST_TRANSPORT", 00:35:23.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.690 "adrfam": "ipv4", 00:35:23.690 "trsvcid": "$NVMF_PORT", 00:35:23.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.690 "hdgst": ${hdgst:-false}, 00:35:23.690 "ddgst": ${ddgst:-false} 00:35:23.690 }, 00:35:23.690 "method": "bdev_nvme_attach_controller" 00:35:23.690 } 00:35:23.690 EOF 00:35:23.690 )") 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:23.690 23:59:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:23.690 "params": { 00:35:23.690 "name": "Nvme1", 00:35:23.690 "trtype": "tcp", 00:35:23.690 "traddr": "10.0.0.2", 00:35:23.690 "adrfam": "ipv4", 00:35:23.690 "trsvcid": "4420", 00:35:23.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:23.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:23.690 "hdgst": false, 00:35:23.690 "ddgst": false 00:35:23.690 }, 00:35:23.690 "method": "bdev_nvme_attach_controller" 00:35:23.690 }' 00:35:23.690 [2024-11-19 23:59:57.940865] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:35:23.690 [2024-11-19 23:59:57.940935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337690 ] 00:35:23.948 [2024-11-19 23:59:58.009043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.948 [2024-11-19 23:59:58.057341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:23.948 Running I/O for 1 seconds... 00:35:25.323 8510.00 IOPS, 33.24 MiB/s 00:35:25.323 Latency(us) 00:35:25.323 [2024-11-19T22:59:59.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.323 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:25.323 Verification LBA range: start 0x0 length 0x4000 00:35:25.323 Nvme1n1 : 1.02 8509.23 33.24 0.00 0.00 14978.65 3252.53 15534.46 00:35:25.323 [2024-11-19T22:59:59.635Z] =================================================================================================================== 00:35:25.323 [2024-11-19T22:59:59.635Z] Total : 8509.23 33.24 0.00 0.00 14978.65 3252.53 15534.46 00:35:25.323 23:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=337948 00:35:25.323 23:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:25.323 23:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:25.323 23:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:25.323 23:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:25.323 23:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:25.323 23:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:25.323 23:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:25.323 { 00:35:25.323 "params": { 00:35:25.323 "name": "Nvme$subsystem", 00:35:25.323 "trtype": "$TEST_TRANSPORT", 00:35:25.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.323 "adrfam": "ipv4", 00:35:25.323 "trsvcid": "$NVMF_PORT", 00:35:25.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.323 "hdgst": ${hdgst:-false}, 00:35:25.323 "ddgst": ${ddgst:-false} 00:35:25.323 }, 00:35:25.323 "method": "bdev_nvme_attach_controller" 00:35:25.323 } 00:35:25.323 EOF 00:35:25.323 )") 00:35:25.323 23:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:25.324 23:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
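The bdevperf runs above and below do not configure themselves over RPC; each reads a JSON bdev configuration from an anonymous pipe (/dev/fd/62 for the 1-second run, /dev/fd/63 for the 15-second run) that gen_nvmf_target_json fills with the bdev_nvme_attach_controller entry printed in the trace. A roughly equivalent standalone invocation is sketched below: the /tmp file name is made up for illustration, and the surrounding "subsystems"/"bdev" envelope is the standard SPDK JSON-config shape rather than something quoted verbatim from this excerpt.

  cat > /tmp/bdevperf_nvme.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # Same workload parameters as the trace: queue depth 128, 4 KiB I/Os,
  # "verify" pattern; the short run above uses -t 1, the failover run that
  # starts next uses -t 15 -f.
  ./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f

The second, 15-second run is the one exercised by the failover step that follows: once the target PID is killed with SIGKILL, every in-flight read is completed back to the host with the ABORTED - SQ DELETION status seen in the flood of nvme_qpair notices below.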
00:35:25.324 23:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:25.324 23:59:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:25.324 "params": { 00:35:25.324 "name": "Nvme1", 00:35:25.324 "trtype": "tcp", 00:35:25.324 "traddr": "10.0.0.2", 00:35:25.324 "adrfam": "ipv4", 00:35:25.324 "trsvcid": "4420", 00:35:25.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:25.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:25.324 "hdgst": false, 00:35:25.324 "ddgst": false 00:35:25.324 }, 00:35:25.324 "method": "bdev_nvme_attach_controller" 00:35:25.324 }' 00:35:25.324 [2024-11-19 23:59:59.491553] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:35:25.324 [2024-11-19 23:59:59.491623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337948 ] 00:35:25.324 [2024-11-19 23:59:59.558681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.324 [2024-11-19 23:59:59.604559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.582 Running I/O for 15 seconds... 00:35:27.895 8397.00 IOPS, 32.80 MiB/s [2024-11-19T23:00:02.528Z] 8447.00 IOPS, 33.00 MiB/s [2024-11-19T23:00:02.528Z] 00:00:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 337661 00:35:28.216 00:00:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:28.216 [2024-11-20 00:00:02.458027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 
00:00:02.458304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.216 [2024-11-20 00:00:02.458670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.216 [2024-11-20 00:00:02.458686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.458701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.458718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.458738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.458755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.458770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.458787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.458802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.458819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.458834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.458850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.458865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.458882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.458906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.458923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.458937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.458954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.458970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.458986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:28.217 [2024-11-20 00:00:02.459686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.459974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.459993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460010] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:104 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.217 [2024-11-20 00:00:02.460910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.217 [2024-11-20 00:00:02.460926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.460940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.460957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.460972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.460988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46296 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:28.218 [2024-11-20 00:00:02.461326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461663] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:28.218 [2024-11-20 00:00:02.461799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.461831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.461862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.461898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.461929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.461960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.461976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.461991] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.462022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.462067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.462109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.462156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.462184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.462212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.462239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.462268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.462295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.218 [2024-11-20 00:00:02.462323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd21f20 is same with the state(6) to be set 00:35:28.218 [2024-11-20 00:00:02.462375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:28.218 [2024-11-20 00:00:02.462387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:28.218 [2024-11-20 00:00:02.462406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46496 len:8 PRP1 0x0 PRP2 0x0 00:35:28.218 [2024-11-20 00:00:02.462421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:28.218 [2024-11-20 00:00:02.462577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:28.218 [2024-11-20 00:00:02.462607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:28.218 [2024-11-20 00:00:02.462643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:28.218 [2024-11-20 00:00:02.462672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.218 [2024-11-20 00:00:02.462685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.218 [2024-11-20 00:00:02.466346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.218 [2024-11-20 00:00:02.466412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.218 [2024-11-20 00:00:02.467096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.218 [2024-11-20 00:00:02.467129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.218 [2024-11-20 00:00:02.467147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.218 [2024-11-20 00:00:02.467387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.218 [2024-11-20 00:00:02.467630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.218 [2024-11-20 00:00:02.467652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller 
reinitialization failed 00:35:28.218 [2024-11-20 00:00:02.467669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.219 [2024-11-20 00:00:02.467686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.219 [2024-11-20 00:00:02.480544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.219 [2024-11-20 00:00:02.480901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.219 [2024-11-20 00:00:02.480934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.219 [2024-11-20 00:00:02.480952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.219 [2024-11-20 00:00:02.481211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.219 [2024-11-20 00:00:02.481456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.219 [2024-11-20 00:00:02.481480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.219 [2024-11-20 00:00:02.481495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.219 [2024-11-20 00:00:02.481510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.219 [2024-11-20 00:00:02.494577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.219 [2024-11-20 00:00:02.495000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.219 [2024-11-20 00:00:02.495032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.219 [2024-11-20 00:00:02.495050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.219 [2024-11-20 00:00:02.495306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.219 [2024-11-20 00:00:02.495550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.219 [2024-11-20 00:00:02.495573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.219 [2024-11-20 00:00:02.495588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.219 [2024-11-20 00:00:02.495602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.219 [2024-11-20 00:00:02.508486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.219 [2024-11-20 00:00:02.508831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.219 [2024-11-20 00:00:02.508863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.219 [2024-11-20 00:00:02.508881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.219 [2024-11-20 00:00:02.509132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.219 [2024-11-20 00:00:02.509376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.219 [2024-11-20 00:00:02.509399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.219 [2024-11-20 00:00:02.509414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.219 [2024-11-20 00:00:02.509428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.492 [2024-11-20 00:00:02.522516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.492 [2024-11-20 00:00:02.522916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.522947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.522965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.523212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.523457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.523480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.523495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.523509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.493 [2024-11-20 00:00:02.536400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.536796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.536827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.536845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.537095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.537339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.537368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.537384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.537399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.493 [2024-11-20 00:00:02.550277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.550751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.550781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.550799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.551038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.551290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.551314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.551330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.551344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.493 [2024-11-20 00:00:02.564217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.564625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.564656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.564673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.564912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.565169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.565194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.565209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.565222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.493 [2024-11-20 00:00:02.578084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.578487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.578518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.578535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.578773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.579017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.579040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.579055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.579080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.493 [2024-11-20 00:00:02.591944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.592331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.592364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.592382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.592620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.592863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.592886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.592901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.592915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.493 [2024-11-20 00:00:02.605785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.606172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.606204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.606221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.606459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.606702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.606725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.606740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.606754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.493 [2024-11-20 00:00:02.619817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.620189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.620220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.620238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.620476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.620719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.620742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.620757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.620771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.493 [2024-11-20 00:00:02.633846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.634218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.634255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.634274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.634513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.634756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.634779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.634794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.634809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.493 [2024-11-20 00:00:02.647685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.648093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.648125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.648143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.648381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.648625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.648649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.648663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.648677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.493 [2024-11-20 00:00:02.661546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.661935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.661966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.661985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.662233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.662478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.662502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.662517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.662531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.493 [2024-11-20 00:00:02.675422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.675802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.675834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.675852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.676107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.676362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.676386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.676401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.676415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.493 [2024-11-20 00:00:02.689306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.689707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.689738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.689756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.689994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.690250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.690274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.690289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.690303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.493 [2024-11-20 00:00:02.703181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.703551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.703582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.703599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.703838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.704094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.704118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.704133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.704147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.493 [2024-11-20 00:00:02.717227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.717622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.717653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.717670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.493 [2024-11-20 00:00:02.717908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.493 [2024-11-20 00:00:02.718165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.493 [2024-11-20 00:00:02.718191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.493 [2024-11-20 00:00:02.718213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.493 [2024-11-20 00:00:02.718228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.493 [2024-11-20 00:00:02.731099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.493 [2024-11-20 00:00:02.731471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.493 [2024-11-20 00:00:02.731502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.493 [2024-11-20 00:00:02.731520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.494 [2024-11-20 00:00:02.731758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.494 [2024-11-20 00:00:02.732007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.494 [2024-11-20 00:00:02.732031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.494 [2024-11-20 00:00:02.732048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.494 [2024-11-20 00:00:02.732062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.494 [2024-11-20 00:00:02.744981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.494 [2024-11-20 00:00:02.745369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.494 [2024-11-20 00:00:02.745399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.494 [2024-11-20 00:00:02.745417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.494 [2024-11-20 00:00:02.745655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.494 [2024-11-20 00:00:02.745898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.494 [2024-11-20 00:00:02.745922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.494 [2024-11-20 00:00:02.745936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.494 [2024-11-20 00:00:02.745951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.494 [2024-11-20 00:00:02.758837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.494 [2024-11-20 00:00:02.759199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.494 [2024-11-20 00:00:02.759231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.494 [2024-11-20 00:00:02.759249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.494 [2024-11-20 00:00:02.759487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.494 [2024-11-20 00:00:02.759730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.494 [2024-11-20 00:00:02.759754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.494 [2024-11-20 00:00:02.759768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.494 [2024-11-20 00:00:02.759782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.494 [2024-11-20 00:00:02.772879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.494 [2024-11-20 00:00:02.773230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.494 [2024-11-20 00:00:02.773261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.494 [2024-11-20 00:00:02.773278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.494 [2024-11-20 00:00:02.773517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.494 [2024-11-20 00:00:02.773760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.494 [2024-11-20 00:00:02.773784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.494 [2024-11-20 00:00:02.773799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.494 [2024-11-20 00:00:02.773813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.494 [2024-11-20 00:00:02.786913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.494 [2024-11-20 00:00:02.787266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.494 [2024-11-20 00:00:02.787297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.494 [2024-11-20 00:00:02.787315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.494 [2024-11-20 00:00:02.787553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.494 [2024-11-20 00:00:02.787796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.494 [2024-11-20 00:00:02.787820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.494 [2024-11-20 00:00:02.787835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.494 [2024-11-20 00:00:02.787849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.494 [2024-11-20 00:00:02.800919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.494 [2024-11-20 00:00:02.801272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.494 [2024-11-20 00:00:02.801302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.494 [2024-11-20 00:00:02.801320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.494 [2024-11-20 00:00:02.801558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.494 [2024-11-20 00:00:02.801801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.494 [2024-11-20 00:00:02.801825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.494 [2024-11-20 00:00:02.801840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.494 [2024-11-20 00:00:02.801854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.752 [2024-11-20 00:00:02.814937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.752 [2024-11-20 00:00:02.815344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.752 [2024-11-20 00:00:02.815387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.752 [2024-11-20 00:00:02.815411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.752 [2024-11-20 00:00:02.815652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.752 [2024-11-20 00:00:02.815896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.752 [2024-11-20 00:00:02.815920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.752 [2024-11-20 00:00:02.815936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.752 [2024-11-20 00:00:02.815951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.752 [2024-11-20 00:00:02.828810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.752 [2024-11-20 00:00:02.829218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.752 [2024-11-20 00:00:02.829250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.752 [2024-11-20 00:00:02.829268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.752 [2024-11-20 00:00:02.829515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.752 [2024-11-20 00:00:02.829758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.752 [2024-11-20 00:00:02.829781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.752 [2024-11-20 00:00:02.829796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.752 [2024-11-20 00:00:02.829810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.752 7361.67 IOPS, 28.76 MiB/s [2024-11-19T23:00:03.064Z] [2024-11-20 00:00:02.844421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.752 [2024-11-20 00:00:02.844820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.752 [2024-11-20 00:00:02.844851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.752 [2024-11-20 00:00:02.844869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.752 [2024-11-20 00:00:02.845120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.752 [2024-11-20 00:00:02.845364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.752 [2024-11-20 00:00:02.845388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:02.845403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:02.845417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.753 [2024-11-20 00:00:02.858287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:02.858681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:02.858712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:02.858730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:02.858968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:02.859233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:02.859258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:02.859273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:02.859287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.753 [2024-11-20 00:00:02.872137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:02.872499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:02.872529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:02.872547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:02.872785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:02.873028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:02.873051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:02.873066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:02.873104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.753 [2024-11-20 00:00:02.886173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:02.886574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:02.886604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:02.886622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:02.886861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:02.887117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:02.887141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:02.887156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:02.887170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.753 [2024-11-20 00:00:02.900033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:02.900409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:02.900441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:02.900459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:02.900698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:02.900941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:02.900964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:02.900985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:02.901000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.753 [2024-11-20 00:00:02.913867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:02.914278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:02.914311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:02.914329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:02.914567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:02.914811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:02.914834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:02.914849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:02.914863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.753 [2024-11-20 00:00:02.927731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:02.928147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:02.928178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:02.928196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:02.928435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:02.928678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:02.928701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:02.928717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:02.928731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.753 [2024-11-20 00:00:02.941618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:02.941975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:02.942005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:02.942023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:02.942273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:02.942516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:02.942540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:02.942556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:02.942570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.753 [2024-11-20 00:00:02.955644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:02.956021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:02.956052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:02.956080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:02.956322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:02.956565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:02.956589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:02.956603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:02.956618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.753 [2024-11-20 00:00:02.969477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:02.969841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:02.969873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:02.969890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:02.970143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:02.970387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:02.970411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:02.970426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:02.970440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.753 [2024-11-20 00:00:02.983320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:02.983695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:02.983726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:02.983744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:02.983983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:02.984236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:02.984260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:02.984275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:02.984289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.753 [2024-11-20 00:00:02.997359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:02.997757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:02.997786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:02.997809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:02.998048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:02.998302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:02.998326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:02.998341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:02.998355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.753 [2024-11-20 00:00:03.011220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:03.011586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:03.011617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:03.011635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:03.011873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:03.012129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:03.012153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:03.012168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:03.012182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.753 [2024-11-20 00:00:03.025249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:03.025635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:03.025665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.753 [2024-11-20 00:00:03.025683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.753 [2024-11-20 00:00:03.025921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.753 [2024-11-20 00:00:03.026178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.753 [2024-11-20 00:00:03.026202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.753 [2024-11-20 00:00:03.026217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.753 [2024-11-20 00:00:03.026231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:28.753 [2024-11-20 00:00:03.039097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.753 [2024-11-20 00:00:03.039514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.753 [2024-11-20 00:00:03.039544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.754 [2024-11-20 00:00:03.039562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.754 [2024-11-20 00:00:03.039800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.754 [2024-11-20 00:00:03.040049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.754 [2024-11-20 00:00:03.040083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.754 [2024-11-20 00:00:03.040109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.754 [2024-11-20 00:00:03.040123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:28.754 [2024-11-20 00:00:03.052986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:28.754 [2024-11-20 00:00:03.053386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.754 [2024-11-20 00:00:03.053417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:28.754 [2024-11-20 00:00:03.053435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:28.754 [2024-11-20 00:00:03.053673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:28.754 [2024-11-20 00:00:03.053916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:28.754 [2024-11-20 00:00:03.053940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:28.754 [2024-11-20 00:00:03.053956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:28.754 [2024-11-20 00:00:03.053970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.010 [2024-11-20 00:00:03.066832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.010 [2024-11-20 00:00:03.067228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.010 [2024-11-20 00:00:03.067258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.010 [2024-11-20 00:00:03.067276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.010 [2024-11-20 00:00:03.067514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.010 [2024-11-20 00:00:03.067766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.010 [2024-11-20 00:00:03.067789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.010 [2024-11-20 00:00:03.067805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.010 [2024-11-20 00:00:03.067819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.010 [2024-11-20 00:00:03.080676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.010 [2024-11-20 00:00:03.081154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.010 [2024-11-20 00:00:03.081186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.010 [2024-11-20 00:00:03.081204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.010 [2024-11-20 00:00:03.081442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.010 [2024-11-20 00:00:03.081685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.010 [2024-11-20 00:00:03.081708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.010 [2024-11-20 00:00:03.081731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.010 [2024-11-20 00:00:03.081746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.010 [2024-11-20 00:00:03.094610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.010 [2024-11-20 00:00:03.095004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.010 [2024-11-20 00:00:03.095035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.010 [2024-11-20 00:00:03.095052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.010 [2024-11-20 00:00:03.095301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.010 [2024-11-20 00:00:03.095544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.010 [2024-11-20 00:00:03.095568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.010 [2024-11-20 00:00:03.095582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.095596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.011 [2024-11-20 00:00:03.108456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.108857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.108887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.108905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.109156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.109400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.109423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.109438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.109452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.011 [2024-11-20 00:00:03.122312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.122697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.122728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.122745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.122983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.123239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.123263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.123278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.123292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.011 [2024-11-20 00:00:03.136161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.136559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.136589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.136606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.136844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.137100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.137124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.137139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.137153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.011 [2024-11-20 00:00:03.150022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.150411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.150442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.150460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.150698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.150942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.150965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.150980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.150994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.011 [2024-11-20 00:00:03.163859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.164255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.164286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.164304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.164542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.164785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.164808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.164823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.164837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.011 [2024-11-20 00:00:03.177705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.178095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.178127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.178150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.178389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.178633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.178656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.178671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.178685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.011 [2024-11-20 00:00:03.191543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.191934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.191964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.191982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.192232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.192475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.192499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.192514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.192527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.011 [2024-11-20 00:00:03.205416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.205786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.205816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.205834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.206084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.206328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.206351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.206366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.206380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.011 [2024-11-20 00:00:03.219445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.219815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.219846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.219864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.220113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.220363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.220387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.220402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.220417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.011 [2024-11-20 00:00:03.233305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.233694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.233725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.233742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.233980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.234234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.234258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.234273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.234286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.011 [2024-11-20 00:00:03.247162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.247558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.247589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.247607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.247845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.248103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.248127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.248142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.248156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.011 [2024-11-20 00:00:03.261006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.261379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.261411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.261429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.261667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.261911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.261934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.261955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.261970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.011 [2024-11-20 00:00:03.275032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.275436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.275467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.275485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.275723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.275966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.275990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.276005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.276019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.011 [2024-11-20 00:00:03.288872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.289254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.289285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.289304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.289543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.289786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.289810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.289825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.289838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.011 [2024-11-20 00:00:03.302724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.303122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.303154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.303173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.303412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.303656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.303681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.303696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.303711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.011 [2024-11-20 00:00:03.316579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.011 [2024-11-20 00:00:03.316984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.011 [2024-11-20 00:00:03.317015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.011 [2024-11-20 00:00:03.317033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.011 [2024-11-20 00:00:03.317281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.011 [2024-11-20 00:00:03.317525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.011 [2024-11-20 00:00:03.317549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.011 [2024-11-20 00:00:03.317564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.011 [2024-11-20 00:00:03.317578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.268 [2024-11-20 00:00:03.330448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.268 [2024-11-20 00:00:03.330812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.268 [2024-11-20 00:00:03.330842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.268 [2024-11-20 00:00:03.330860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.268 [2024-11-20 00:00:03.331110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.268 [2024-11-20 00:00:03.331353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.268 [2024-11-20 00:00:03.331377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.268 [2024-11-20 00:00:03.331392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.268 [2024-11-20 00:00:03.331406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.268 [2024-11-20 00:00:03.344490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.268 [2024-11-20 00:00:03.344891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.268 [2024-11-20 00:00:03.344921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.268 [2024-11-20 00:00:03.344939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.268 [2024-11-20 00:00:03.345190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.268 [2024-11-20 00:00:03.345434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.268 [2024-11-20 00:00:03.345457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.268 [2024-11-20 00:00:03.345472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.268 [2024-11-20 00:00:03.345486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.268 [2024-11-20 00:00:03.358340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.268 [2024-11-20 00:00:03.358739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.268 [2024-11-20 00:00:03.358770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.268 [2024-11-20 00:00:03.358793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.268 [2024-11-20 00:00:03.359032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.268 [2024-11-20 00:00:03.359288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.268 [2024-11-20 00:00:03.359312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.268 [2024-11-20 00:00:03.359327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.268 [2024-11-20 00:00:03.359340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.268 [2024-11-20 00:00:03.372228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.268 [2024-11-20 00:00:03.372612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.268 [2024-11-20 00:00:03.372643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.268 [2024-11-20 00:00:03.372661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.268 [2024-11-20 00:00:03.372898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.268 [2024-11-20 00:00:03.373157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.268 [2024-11-20 00:00:03.373182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.268 [2024-11-20 00:00:03.373196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.268 [2024-11-20 00:00:03.373211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.268 [2024-11-20 00:00:03.386091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.268 [2024-11-20 00:00:03.386481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.268 [2024-11-20 00:00:03.386512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.268 [2024-11-20 00:00:03.386530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.268 [2024-11-20 00:00:03.386768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.268 [2024-11-20 00:00:03.387011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.268 [2024-11-20 00:00:03.387035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.268 [2024-11-20 00:00:03.387050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.268 [2024-11-20 00:00:03.387064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.268 [2024-11-20 00:00:03.399941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.268 [2024-11-20 00:00:03.400339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.268 [2024-11-20 00:00:03.400370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.268 [2024-11-20 00:00:03.400387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.268 [2024-11-20 00:00:03.400626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.268 [2024-11-20 00:00:03.400875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.268 [2024-11-20 00:00:03.400898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.268 [2024-11-20 00:00:03.400914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.400927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.269 [2024-11-20 00:00:03.413801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.269 [2024-11-20 00:00:03.414199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.269 [2024-11-20 00:00:03.414230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.269 [2024-11-20 00:00:03.414248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.269 [2024-11-20 00:00:03.414486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.269 [2024-11-20 00:00:03.414729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.269 [2024-11-20 00:00:03.414753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.269 [2024-11-20 00:00:03.414768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.414783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.269 [2024-11-20 00:00:03.427655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.269 [2024-11-20 00:00:03.428042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.269 [2024-11-20 00:00:03.428082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.269 [2024-11-20 00:00:03.428102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.269 [2024-11-20 00:00:03.428341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.269 [2024-11-20 00:00:03.428584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.269 [2024-11-20 00:00:03.428607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.269 [2024-11-20 00:00:03.428622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.428636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.269 [2024-11-20 00:00:03.441514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.269 [2024-11-20 00:00:03.441893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.269 [2024-11-20 00:00:03.441924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.269 [2024-11-20 00:00:03.441942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.269 [2024-11-20 00:00:03.442192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.269 [2024-11-20 00:00:03.442436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.269 [2024-11-20 00:00:03.442459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.269 [2024-11-20 00:00:03.442474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.442494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.269 [2024-11-20 00:00:03.455548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.269 [2024-11-20 00:00:03.455941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.269 [2024-11-20 00:00:03.455972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.269 [2024-11-20 00:00:03.455990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.269 [2024-11-20 00:00:03.456243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.269 [2024-11-20 00:00:03.456486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.269 [2024-11-20 00:00:03.456510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.269 [2024-11-20 00:00:03.456525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.456539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.269 [2024-11-20 00:00:03.469409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.269 [2024-11-20 00:00:03.469810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.269 [2024-11-20 00:00:03.469840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.269 [2024-11-20 00:00:03.469858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.269 [2024-11-20 00:00:03.470112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.269 [2024-11-20 00:00:03.470356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.269 [2024-11-20 00:00:03.470380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.269 [2024-11-20 00:00:03.470396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.470410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.269 [2024-11-20 00:00:03.483500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.269 [2024-11-20 00:00:03.483893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.269 [2024-11-20 00:00:03.483924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.269 [2024-11-20 00:00:03.483943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.269 [2024-11-20 00:00:03.484193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.269 [2024-11-20 00:00:03.484437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.269 [2024-11-20 00:00:03.484460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.269 [2024-11-20 00:00:03.484475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.484489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.269 [2024-11-20 00:00:03.497354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.269 [2024-11-20 00:00:03.497731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.269 [2024-11-20 00:00:03.497762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.269 [2024-11-20 00:00:03.497781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.269 [2024-11-20 00:00:03.498019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.269 [2024-11-20 00:00:03.498272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.269 [2024-11-20 00:00:03.498297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.269 [2024-11-20 00:00:03.498312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.498326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.269 [2024-11-20 00:00:03.511396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.269 [2024-11-20 00:00:03.511867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.269 [2024-11-20 00:00:03.511898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.269 [2024-11-20 00:00:03.511915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.269 [2024-11-20 00:00:03.512166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.269 [2024-11-20 00:00:03.512409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.269 [2024-11-20 00:00:03.512433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.269 [2024-11-20 00:00:03.512448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.512462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.269 [2024-11-20 00:00:03.525321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.269 [2024-11-20 00:00:03.525695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.269 [2024-11-20 00:00:03.525725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.269 [2024-11-20 00:00:03.525743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.269 [2024-11-20 00:00:03.525981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.269 [2024-11-20 00:00:03.526239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.269 [2024-11-20 00:00:03.526263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.269 [2024-11-20 00:00:03.526279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.526293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.269 [2024-11-20 00:00:03.539172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.269 [2024-11-20 00:00:03.539550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.269 [2024-11-20 00:00:03.539581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.269 [2024-11-20 00:00:03.539599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.269 [2024-11-20 00:00:03.539843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.269 [2024-11-20 00:00:03.540102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.269 [2024-11-20 00:00:03.540126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.269 [2024-11-20 00:00:03.540141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.540156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.269 [2024-11-20 00:00:03.553048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.269 [2024-11-20 00:00:03.553456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.269 [2024-11-20 00:00:03.553487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.269 [2024-11-20 00:00:03.553505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.269 [2024-11-20 00:00:03.553744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.269 [2024-11-20 00:00:03.553987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.269 [2024-11-20 00:00:03.554010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.269 [2024-11-20 00:00:03.554025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.554039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.269 [2024-11-20 00:00:03.566959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.269 [2024-11-20 00:00:03.567346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.269 [2024-11-20 00:00:03.567378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.269 [2024-11-20 00:00:03.567396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.269 [2024-11-20 00:00:03.567634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.269 [2024-11-20 00:00:03.567878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.269 [2024-11-20 00:00:03.567901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.269 [2024-11-20 00:00:03.567916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.269 [2024-11-20 00:00:03.567930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.528 [2024-11-20 00:00:03.580820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-20 00:00:03.581218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-20 00:00:03.581250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-20 00:00:03.581268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.528 [2024-11-20 00:00:03.581507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.528 [2024-11-20 00:00:03.581751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-20 00:00:03.581780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-20 00:00:03.581797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-20 00:00:03.581811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.528 [2024-11-20 00:00:03.594699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-20 00:00:03.595066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-20 00:00:03.595107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-20 00:00:03.595125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.528 [2024-11-20 00:00:03.595364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.528 [2024-11-20 00:00:03.595607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-20 00:00:03.595631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-20 00:00:03.595646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-20 00:00:03.595660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.528 [2024-11-20 00:00:03.608551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-20 00:00:03.608953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-20 00:00:03.608990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-20 00:00:03.609008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.528 [2024-11-20 00:00:03.609259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.528 [2024-11-20 00:00:03.609503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-20 00:00:03.609526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-20 00:00:03.609542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-20 00:00:03.609556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.528 [2024-11-20 00:00:03.622455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-20 00:00:03.622843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-20 00:00:03.622873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-20 00:00:03.622891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.528 [2024-11-20 00:00:03.623143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.528 [2024-11-20 00:00:03.623391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-20 00:00:03.623415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-20 00:00:03.623429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-20 00:00:03.623449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.528 [2024-11-20 00:00:03.636336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.528 [2024-11-20 00:00:03.636731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.528 [2024-11-20 00:00:03.636762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.528 [2024-11-20 00:00:03.636779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.528 [2024-11-20 00:00:03.637018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.528 [2024-11-20 00:00:03.637276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.528 [2024-11-20 00:00:03.637301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.528 [2024-11-20 00:00:03.637316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.528 [2024-11-20 00:00:03.637330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.528 [2024-11-20 00:00:03.650237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-20 00:00:03.650614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-20 00:00:03.650646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-20 00:00:03.650663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.529 [2024-11-20 00:00:03.650901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.529 [2024-11-20 00:00:03.651160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-20 00:00:03.651185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-20 00:00:03.651200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-20 00:00:03.651214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.529 [2024-11-20 00:00:03.664094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-20 00:00:03.664496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-20 00:00:03.664526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-20 00:00:03.664544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.529 [2024-11-20 00:00:03.664782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.529 [2024-11-20 00:00:03.665025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-20 00:00:03.665048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-20 00:00:03.665064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-20 00:00:03.665094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.529 [2024-11-20 00:00:03.678005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-20 00:00:03.678481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-20 00:00:03.678512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-20 00:00:03.678531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.529 [2024-11-20 00:00:03.678769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.529 [2024-11-20 00:00:03.679012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-20 00:00:03.679036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-20 00:00:03.679050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-20 00:00:03.679065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.529 [2024-11-20 00:00:03.691982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-20 00:00:03.692439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-20 00:00:03.692470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-20 00:00:03.692488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.529 [2024-11-20 00:00:03.692727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.529 [2024-11-20 00:00:03.692969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-20 00:00:03.692993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-20 00:00:03.693008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-20 00:00:03.693022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.529 [2024-11-20 00:00:03.705920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-20 00:00:03.706296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-20 00:00:03.706328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-20 00:00:03.706346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.529 [2024-11-20 00:00:03.706584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.529 [2024-11-20 00:00:03.706828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-20 00:00:03.706852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-20 00:00:03.706867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-20 00:00:03.706881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.529 [2024-11-20 00:00:03.719763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-20 00:00:03.720141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-20 00:00:03.720173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-20 00:00:03.720191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.529 [2024-11-20 00:00:03.720435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.529 [2024-11-20 00:00:03.720679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-20 00:00:03.720703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-20 00:00:03.720719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-20 00:00:03.720734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.529 [2024-11-20 00:00:03.733630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-20 00:00:03.733982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-20 00:00:03.734013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-20 00:00:03.734031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.529 [2024-11-20 00:00:03.734279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.529 [2024-11-20 00:00:03.734523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-20 00:00:03.734547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-20 00:00:03.734563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-20 00:00:03.734577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.529 [2024-11-20 00:00:03.747685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.529 [2024-11-20 00:00:03.748084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.529 [2024-11-20 00:00:03.748115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.529 [2024-11-20 00:00:03.748133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.529 [2024-11-20 00:00:03.748371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.529 [2024-11-20 00:00:03.748614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.529 [2024-11-20 00:00:03.748638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.529 [2024-11-20 00:00:03.748653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.529 [2024-11-20 00:00:03.748666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.529 [2024-11-20 00:00:03.761550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.530 [2024-11-20 00:00:03.762008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.530 [2024-11-20 00:00:03.762038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.530 [2024-11-20 00:00:03.762056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.530 [2024-11-20 00:00:03.762306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.530 [2024-11-20 00:00:03.762550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.530 [2024-11-20 00:00:03.762582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.530 [2024-11-20 00:00:03.762598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.530 [2024-11-20 00:00:03.762612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.530 [2024-11-20 00:00:03.775508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.530 [2024-11-20 00:00:03.775913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.530 [2024-11-20 00:00:03.775945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.530 [2024-11-20 00:00:03.775962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.530 [2024-11-20 00:00:03.776212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.530 [2024-11-20 00:00:03.776456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.530 [2024-11-20 00:00:03.776480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.530 [2024-11-20 00:00:03.776495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.530 [2024-11-20 00:00:03.776509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.530 [2024-11-20 00:00:03.789382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.530 [2024-11-20 00:00:03.789752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.530 [2024-11-20 00:00:03.789783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.530 [2024-11-20 00:00:03.789800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.530 [2024-11-20 00:00:03.790038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.530 [2024-11-20 00:00:03.790291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.530 [2024-11-20 00:00:03.790316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.530 [2024-11-20 00:00:03.790331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.530 [2024-11-20 00:00:03.790345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.530 [2024-11-20 00:00:03.803423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.530 [2024-11-20 00:00:03.803813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.530 [2024-11-20 00:00:03.803844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.530 [2024-11-20 00:00:03.803862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.530 [2024-11-20 00:00:03.804109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.530 [2024-11-20 00:00:03.804353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.530 [2024-11-20 00:00:03.804376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.530 [2024-11-20 00:00:03.804391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.530 [2024-11-20 00:00:03.804412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.530 [2024-11-20 00:00:03.817290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.530 [2024-11-20 00:00:03.817666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.530 [2024-11-20 00:00:03.817696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.530 [2024-11-20 00:00:03.817714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.530 [2024-11-20 00:00:03.817952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.530 [2024-11-20 00:00:03.818207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.530 [2024-11-20 00:00:03.818231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.530 [2024-11-20 00:00:03.818246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.530 [2024-11-20 00:00:03.818260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.530 [2024-11-20 00:00:03.831135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.530 [2024-11-20 00:00:03.831525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.530 [2024-11-20 00:00:03.831555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.530 [2024-11-20 00:00:03.831573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.530 [2024-11-20 00:00:03.831811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.530 [2024-11-20 00:00:03.832055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.530 [2024-11-20 00:00:03.832088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.530 [2024-11-20 00:00:03.832105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.530 [2024-11-20 00:00:03.832119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.789 [2024-11-20 00:00:03.846712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.789 5521.25 IOPS, 21.57 MiB/s [2024-11-19T23:00:04.101Z] [2024-11-20 00:00:03.847112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.789 [2024-11-20 00:00:03.847145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.789 [2024-11-20 00:00:03.847162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.789 [2024-11-20 00:00:03.847400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.789 [2024-11-20 00:00:03.847643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.789 [2024-11-20 00:00:03.847668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.789 [2024-11-20 00:00:03.847683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.789 [2024-11-20 00:00:03.847697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.789 [2024-11-20 00:00:03.860561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.789 [2024-11-20 00:00:03.860960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.789 [2024-11-20 00:00:03.860991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.789 [2024-11-20 00:00:03.861009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.789 [2024-11-20 00:00:03.861256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.789 [2024-11-20 00:00:03.861500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.789 [2024-11-20 00:00:03.861523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.789 [2024-11-20 00:00:03.861539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.789 [2024-11-20 00:00:03.861553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.789 [2024-11-20 00:00:03.874420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.789 [2024-11-20 00:00:03.874810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.789 [2024-11-20 00:00:03.874841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.789 [2024-11-20 00:00:03.874858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.789 [2024-11-20 00:00:03.875106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.789 [2024-11-20 00:00:03.875348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.789 [2024-11-20 00:00:03.875372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.789 [2024-11-20 00:00:03.875387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.789 [2024-11-20 00:00:03.875401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.789 [2024-11-20 00:00:03.888263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-20 00:00:03.888655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-20 00:00:03.888686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-20 00:00:03.888703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.790 [2024-11-20 00:00:03.888942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.790 [2024-11-20 00:00:03.889195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-20 00:00:03.889220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-20 00:00:03.889234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-20 00:00:03.889248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.790 [2024-11-20 00:00:03.902128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-20 00:00:03.902592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-20 00:00:03.902623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-20 00:00:03.902640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.790 [2024-11-20 00:00:03.902884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.790 [2024-11-20 00:00:03.903139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-20 00:00:03.903164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-20 00:00:03.903179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-20 00:00:03.903193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.790 [2024-11-20 00:00:03.916053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-20 00:00:03.916455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-20 00:00:03.916486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-20 00:00:03.916503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.790 [2024-11-20 00:00:03.916741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.790 [2024-11-20 00:00:03.916983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-20 00:00:03.917007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-20 00:00:03.917022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-20 00:00:03.917036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.790 [2024-11-20 00:00:03.929902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-20 00:00:03.930275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-20 00:00:03.930306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-20 00:00:03.930323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.790 [2024-11-20 00:00:03.930561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.790 [2024-11-20 00:00:03.930804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-20 00:00:03.930827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-20 00:00:03.930843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-20 00:00:03.930857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.790 [2024-11-20 00:00:03.943937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-20 00:00:03.944347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-20 00:00:03.944378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-20 00:00:03.944396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.790 [2024-11-20 00:00:03.944634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.790 [2024-11-20 00:00:03.944889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-20 00:00:03.944919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-20 00:00:03.944935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-20 00:00:03.944950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.790 [2024-11-20 00:00:03.957806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-20 00:00:03.958201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-20 00:00:03.958233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-20 00:00:03.958250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.790 [2024-11-20 00:00:03.958489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.790 [2024-11-20 00:00:03.958732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-20 00:00:03.958755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-20 00:00:03.958770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-20 00:00:03.958784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.790 [2024-11-20 00:00:03.971641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-20 00:00:03.972031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-20 00:00:03.972063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-20 00:00:03.972096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.790 [2024-11-20 00:00:03.972336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.790 [2024-11-20 00:00:03.972579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-20 00:00:03.972603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-20 00:00:03.972620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-20 00:00:03.972635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.790 [2024-11-20 00:00:03.985501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-20 00:00:03.985897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-20 00:00:03.985928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-20 00:00:03.985946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.790 [2024-11-20 00:00:03.986195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.790 [2024-11-20 00:00:03.986439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-20 00:00:03.986463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-20 00:00:03.986479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-20 00:00:03.986499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.790 [2024-11-20 00:00:03.999362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.790 [2024-11-20 00:00:03.999700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.790 [2024-11-20 00:00:03.999730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.790 [2024-11-20 00:00:03.999747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.790 [2024-11-20 00:00:03.999985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.790 [2024-11-20 00:00:04.000237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.790 [2024-11-20 00:00:04.000262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.790 [2024-11-20 00:00:04.000276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.790 [2024-11-20 00:00:04.000291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.790 [2024-11-20 00:00:04.013360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.791 [2024-11-20 00:00:04.013735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.791 [2024-11-20 00:00:04.013765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.791 [2024-11-20 00:00:04.013783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.791 [2024-11-20 00:00:04.014021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.791 [2024-11-20 00:00:04.014275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.791 [2024-11-20 00:00:04.014299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.791 [2024-11-20 00:00:04.014314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.791 [2024-11-20 00:00:04.014328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.791 [2024-11-20 00:00:04.027390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.791 [2024-11-20 00:00:04.027843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.791 [2024-11-20 00:00:04.027874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.791 [2024-11-20 00:00:04.027891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.791 [2024-11-20 00:00:04.028142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.791 [2024-11-20 00:00:04.028386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.791 [2024-11-20 00:00:04.028410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.791 [2024-11-20 00:00:04.028424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.791 [2024-11-20 00:00:04.028438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.791 [2024-11-20 00:00:04.041287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.791 [2024-11-20 00:00:04.041762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.791 [2024-11-20 00:00:04.041818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.791 [2024-11-20 00:00:04.041836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.791 [2024-11-20 00:00:04.042085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.791 [2024-11-20 00:00:04.042328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.791 [2024-11-20 00:00:04.042352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.791 [2024-11-20 00:00:04.042367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.791 [2024-11-20 00:00:04.042381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.791 [2024-11-20 00:00:04.055252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.791 [2024-11-20 00:00:04.055728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.791 [2024-11-20 00:00:04.055779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.791 [2024-11-20 00:00:04.055797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.791 [2024-11-20 00:00:04.056035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.791 [2024-11-20 00:00:04.056289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.791 [2024-11-20 00:00:04.056313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.791 [2024-11-20 00:00:04.056329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.791 [2024-11-20 00:00:04.056343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.791 [2024-11-20 00:00:04.069197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.791 [2024-11-20 00:00:04.069673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.791 [2024-11-20 00:00:04.069724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.791 [2024-11-20 00:00:04.069742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.791 [2024-11-20 00:00:04.069980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.791 [2024-11-20 00:00:04.070235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.791 [2024-11-20 00:00:04.070259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.791 [2024-11-20 00:00:04.070274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.791 [2024-11-20 00:00:04.070288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:29.791 [2024-11-20 00:00:04.083142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.791 [2024-11-20 00:00:04.083641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.791 [2024-11-20 00:00:04.083693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.791 [2024-11-20 00:00:04.083711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.791 [2024-11-20 00:00:04.083955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.791 [2024-11-20 00:00:04.084211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.791 [2024-11-20 00:00:04.084236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.791 [2024-11-20 00:00:04.084251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.791 [2024-11-20 00:00:04.084265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:29.791 [2024-11-20 00:00:04.097115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:29.791 [2024-11-20 00:00:04.097504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:29.791 [2024-11-20 00:00:04.097536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:29.791 [2024-11-20 00:00:04.097553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:29.791 [2024-11-20 00:00:04.097791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:29.791 [2024-11-20 00:00:04.098034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:29.791 [2024-11-20 00:00:04.098057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:29.791 [2024-11-20 00:00:04.098086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:29.791 [2024-11-20 00:00:04.098103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.050 [2024-11-20 00:00:04.110960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-20 00:00:04.111368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-20 00:00:04.111400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.050 [2024-11-20 00:00:04.111418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.050 [2024-11-20 00:00:04.111656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.050 [2024-11-20 00:00:04.111899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.050 [2024-11-20 00:00:04.111923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.050 [2024-11-20 00:00:04.111938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.050 [2024-11-20 00:00:04.111951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.050 [2024-11-20 00:00:04.124805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-20 00:00:04.125203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-20 00:00:04.125234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.050 [2024-11-20 00:00:04.125251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.050 [2024-11-20 00:00:04.125489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.050 [2024-11-20 00:00:04.125732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.050 [2024-11-20 00:00:04.125760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.050 [2024-11-20 00:00:04.125777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.050 [2024-11-20 00:00:04.125791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.050 [2024-11-20 00:00:04.138644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-20 00:00:04.139041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-20 00:00:04.139080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.050 [2024-11-20 00:00:04.139100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.050 [2024-11-20 00:00:04.139338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.050 [2024-11-20 00:00:04.139581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.050 [2024-11-20 00:00:04.139604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.050 [2024-11-20 00:00:04.139619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.050 [2024-11-20 00:00:04.139633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.050 [2024-11-20 00:00:04.152507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-20 00:00:04.152901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-20 00:00:04.152932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.050 [2024-11-20 00:00:04.152949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.050 [2024-11-20 00:00:04.153200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.050 [2024-11-20 00:00:04.153443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.050 [2024-11-20 00:00:04.153466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.050 [2024-11-20 00:00:04.153481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.050 [2024-11-20 00:00:04.153495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.050 [2024-11-20 00:00:04.166342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.050 [2024-11-20 00:00:04.166738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.050 [2024-11-20 00:00:04.166769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.050 [2024-11-20 00:00:04.166786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.051 [2024-11-20 00:00:04.167024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.051 [2024-11-20 00:00:04.167279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-20 00:00:04.167303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-20 00:00:04.167318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-20 00:00:04.167332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.051 [2024-11-20 00:00:04.180191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-20 00:00:04.180577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-20 00:00:04.180608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-20 00:00:04.180625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.051 [2024-11-20 00:00:04.180863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.051 [2024-11-20 00:00:04.181118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-20 00:00:04.181154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-20 00:00:04.181172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-20 00:00:04.181186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.051 [2024-11-20 00:00:04.194030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-20 00:00:04.194436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-20 00:00:04.194466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-20 00:00:04.194484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.051 [2024-11-20 00:00:04.194721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.051 [2024-11-20 00:00:04.194965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-20 00:00:04.194988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-20 00:00:04.195004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-20 00:00:04.195018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.051 [2024-11-20 00:00:04.207880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-20 00:00:04.208296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-20 00:00:04.208327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-20 00:00:04.208345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.051 [2024-11-20 00:00:04.208583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.051 [2024-11-20 00:00:04.208826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-20 00:00:04.208850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-20 00:00:04.208864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-20 00:00:04.208879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.051 [2024-11-20 00:00:04.221732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-20 00:00:04.222115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-20 00:00:04.222153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-20 00:00:04.222172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.051 [2024-11-20 00:00:04.222411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.051 [2024-11-20 00:00:04.222654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-20 00:00:04.222678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-20 00:00:04.222693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-20 00:00:04.222707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.051 [2024-11-20 00:00:04.235568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-20 00:00:04.235956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-20 00:00:04.235987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-20 00:00:04.236005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.051 [2024-11-20 00:00:04.236254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.051 [2024-11-20 00:00:04.236499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-20 00:00:04.236523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-20 00:00:04.236539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-20 00:00:04.236553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.051 [2024-11-20 00:00:04.249420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-20 00:00:04.249811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-20 00:00:04.249841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-20 00:00:04.249859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.051 [2024-11-20 00:00:04.250106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.051 [2024-11-20 00:00:04.250349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-20 00:00:04.250373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-20 00:00:04.250388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-20 00:00:04.250401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.051 [2024-11-20 00:00:04.263264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-20 00:00:04.263661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-20 00:00:04.263692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-20 00:00:04.263709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.051 [2024-11-20 00:00:04.263952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.051 [2024-11-20 00:00:04.264209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-20 00:00:04.264233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-20 00:00:04.264248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-20 00:00:04.264263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.051 [2024-11-20 00:00:04.277140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.051 [2024-11-20 00:00:04.277518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.051 [2024-11-20 00:00:04.277549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.051 [2024-11-20 00:00:04.277567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.051 [2024-11-20 00:00:04.277805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.051 [2024-11-20 00:00:04.278048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.051 [2024-11-20 00:00:04.278085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.051 [2024-11-20 00:00:04.278103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.051 [2024-11-20 00:00:04.278117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.052 [2024-11-20 00:00:04.291045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.052 [2024-11-20 00:00:04.291439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.052 [2024-11-20 00:00:04.291471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.052 [2024-11-20 00:00:04.291488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.052 [2024-11-20 00:00:04.291726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.052 [2024-11-20 00:00:04.291970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.052 [2024-11-20 00:00:04.291993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.052 [2024-11-20 00:00:04.292009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.052 [2024-11-20 00:00:04.292022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.052 [2024-11-20 00:00:04.304900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.052 [2024-11-20 00:00:04.305298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.052 [2024-11-20 00:00:04.305330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.052 [2024-11-20 00:00:04.305347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.052 [2024-11-20 00:00:04.305585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.052 [2024-11-20 00:00:04.305829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.052 [2024-11-20 00:00:04.305853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.052 [2024-11-20 00:00:04.305874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.052 [2024-11-20 00:00:04.305889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.052 [2024-11-20 00:00:04.318780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.052 [2024-11-20 00:00:04.319161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.052 [2024-11-20 00:00:04.319192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.052 [2024-11-20 00:00:04.319210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.052 [2024-11-20 00:00:04.319447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.052 [2024-11-20 00:00:04.319690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.052 [2024-11-20 00:00:04.319714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.052 [2024-11-20 00:00:04.319729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.052 [2024-11-20 00:00:04.319743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.052 [2024-11-20 00:00:04.332651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.052 [2024-11-20 00:00:04.333044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.052 [2024-11-20 00:00:04.333084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.052 [2024-11-20 00:00:04.333115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.052 [2024-11-20 00:00:04.333354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.052 [2024-11-20 00:00:04.333603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.052 [2024-11-20 00:00:04.333626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.052 [2024-11-20 00:00:04.333641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.052 [2024-11-20 00:00:04.333656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.052 [2024-11-20 00:00:04.346560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.052 [2024-11-20 00:00:04.346949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.052 [2024-11-20 00:00:04.346979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.052 [2024-11-20 00:00:04.346997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.052 [2024-11-20 00:00:04.347250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.052 [2024-11-20 00:00:04.347494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.052 [2024-11-20 00:00:04.347517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.052 [2024-11-20 00:00:04.347532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.052 [2024-11-20 00:00:04.347546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.312 [2024-11-20 00:00:04.360438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-20 00:00:04.360834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-20 00:00:04.360864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-20 00:00:04.360882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.312 [2024-11-20 00:00:04.361131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.312 [2024-11-20 00:00:04.361376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-20 00:00:04.361399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-20 00:00:04.361414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-20 00:00:04.361427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.312 [2024-11-20 00:00:04.374319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-20 00:00:04.374700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-20 00:00:04.374731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-20 00:00:04.374749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.312 [2024-11-20 00:00:04.374987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.312 [2024-11-20 00:00:04.375242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-20 00:00:04.375267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-20 00:00:04.375282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-20 00:00:04.375296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.312 [2024-11-20 00:00:04.388178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-20 00:00:04.388579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-20 00:00:04.388610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-20 00:00:04.388627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.312 [2024-11-20 00:00:04.388864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.312 [2024-11-20 00:00:04.389119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-20 00:00:04.389151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-20 00:00:04.389170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-20 00:00:04.389184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.312 [2024-11-20 00:00:04.402053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-20 00:00:04.402441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-20 00:00:04.402471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-20 00:00:04.402496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.312 [2024-11-20 00:00:04.402735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.312 [2024-11-20 00:00:04.402979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-20 00:00:04.403002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-20 00:00:04.403017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-20 00:00:04.403031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.312 [2024-11-20 00:00:04.415923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-20 00:00:04.416302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-20 00:00:04.416333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-20 00:00:04.416351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.312 [2024-11-20 00:00:04.416590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.312 [2024-11-20 00:00:04.416832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-20 00:00:04.416855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-20 00:00:04.416870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-20 00:00:04.416884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.312 [2024-11-20 00:00:04.429761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-20 00:00:04.430127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-20 00:00:04.430158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-20 00:00:04.430176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.312 [2024-11-20 00:00:04.430415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.312 [2024-11-20 00:00:04.430658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-20 00:00:04.430681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-20 00:00:04.430695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-20 00:00:04.430709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.312 [2024-11-20 00:00:04.443783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-20 00:00:04.444132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-20 00:00:04.444163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-20 00:00:04.444181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.312 [2024-11-20 00:00:04.444419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.312 [2024-11-20 00:00:04.444671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-20 00:00:04.444695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-20 00:00:04.444710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-20 00:00:04.444723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.312 [2024-11-20 00:00:04.457818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-20 00:00:04.458220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-20 00:00:04.458251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-20 00:00:04.458268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.312 [2024-11-20 00:00:04.458506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.312 [2024-11-20 00:00:04.458749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.312 [2024-11-20 00:00:04.458772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.312 [2024-11-20 00:00:04.458788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.312 [2024-11-20 00:00:04.458802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.312 [2024-11-20 00:00:04.471672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.312 [2024-11-20 00:00:04.472048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.312 [2024-11-20 00:00:04.472086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.312 [2024-11-20 00:00:04.472105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.312 [2024-11-20 00:00:04.472344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.312 [2024-11-20 00:00:04.472587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.313 [2024-11-20 00:00:04.472611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.313 [2024-11-20 00:00:04.472626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.313 [2024-11-20 00:00:04.472640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.313 [2024-11-20 00:00:04.485507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.313 [2024-11-20 00:00:04.485872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.313 [2024-11-20 00:00:04.485904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.313 [2024-11-20 00:00:04.485921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.313 [2024-11-20 00:00:04.486172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.313 [2024-11-20 00:00:04.486416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.313 [2024-11-20 00:00:04.486440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.313 [2024-11-20 00:00:04.486461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.313 [2024-11-20 00:00:04.486477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.313 [2024-11-20 00:00:04.499738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.313 [2024-11-20 00:00:04.500108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.313 [2024-11-20 00:00:04.500140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.313 [2024-11-20 00:00:04.500158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.313 [2024-11-20 00:00:04.500396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.313 [2024-11-20 00:00:04.500640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.313 [2024-11-20 00:00:04.500664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.313 [2024-11-20 00:00:04.500678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.313 [2024-11-20 00:00:04.500692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.313 [2024-11-20 00:00:04.513769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.313 [2024-11-20 00:00:04.514161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.313 [2024-11-20 00:00:04.514193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.313 [2024-11-20 00:00:04.514211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.313 [2024-11-20 00:00:04.514448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.313 [2024-11-20 00:00:04.514692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.313 [2024-11-20 00:00:04.514715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.313 [2024-11-20 00:00:04.514730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.313 [2024-11-20 00:00:04.514744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.313 [2024-11-20 00:00:04.527803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.313 [2024-11-20 00:00:04.528183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.313 [2024-11-20 00:00:04.528215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.313 [2024-11-20 00:00:04.528232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.313 [2024-11-20 00:00:04.528470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.313 [2024-11-20 00:00:04.528713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.313 [2024-11-20 00:00:04.528736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.313 [2024-11-20 00:00:04.528752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.313 [2024-11-20 00:00:04.528765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.313 [2024-11-20 00:00:04.541848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.313 [2024-11-20 00:00:04.542252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.313 [2024-11-20 00:00:04.542283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.313 [2024-11-20 00:00:04.542301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.313 [2024-11-20 00:00:04.542539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.313 [2024-11-20 00:00:04.542782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.313 [2024-11-20 00:00:04.542805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.313 [2024-11-20 00:00:04.542820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.313 [2024-11-20 00:00:04.542834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.313 [2024-11-20 00:00:04.555713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.313 [2024-11-20 00:00:04.556091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.313 [2024-11-20 00:00:04.556122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.313 [2024-11-20 00:00:04.556140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.313 [2024-11-20 00:00:04.556379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.313 [2024-11-20 00:00:04.556622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.313 [2024-11-20 00:00:04.556645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.313 [2024-11-20 00:00:04.556660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.313 [2024-11-20 00:00:04.556674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.313 [2024-11-20 00:00:04.569743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.313 [2024-11-20 00:00:04.570131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.313 [2024-11-20 00:00:04.570163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.313 [2024-11-20 00:00:04.570180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.313 [2024-11-20 00:00:04.570419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.313 [2024-11-20 00:00:04.570663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.313 [2024-11-20 00:00:04.570686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.313 [2024-11-20 00:00:04.570701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.313 [2024-11-20 00:00:04.570715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.313 [2024-11-20 00:00:04.583776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.313 [2024-11-20 00:00:04.584117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.313 [2024-11-20 00:00:04.584149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.313 [2024-11-20 00:00:04.584173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.313 [2024-11-20 00:00:04.584411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.313 [2024-11-20 00:00:04.584655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.313 [2024-11-20 00:00:04.584679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.313 [2024-11-20 00:00:04.584694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.313 [2024-11-20 00:00:04.584708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.313 [2024-11-20 00:00:04.597777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.313 [2024-11-20 00:00:04.598197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.313 [2024-11-20 00:00:04.598227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.313 [2024-11-20 00:00:04.598245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.314 [2024-11-20 00:00:04.598483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.314 [2024-11-20 00:00:04.598726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.314 [2024-11-20 00:00:04.598749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.314 [2024-11-20 00:00:04.598765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.314 [2024-11-20 00:00:04.598779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.314 [2024-11-20 00:00:04.611643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.314 [2024-11-20 00:00:04.612017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.314 [2024-11-20 00:00:04.612047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.314 [2024-11-20 00:00:04.612065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.314 [2024-11-20 00:00:04.612318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.314 [2024-11-20 00:00:04.612561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.314 [2024-11-20 00:00:04.612584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.314 [2024-11-20 00:00:04.612599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.314 [2024-11-20 00:00:04.612613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.573 [2024-11-20 00:00:04.625683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-20 00:00:04.626062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-20 00:00:04.626100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-20 00:00:04.626118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.573 [2024-11-20 00:00:04.626357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.573 [2024-11-20 00:00:04.626606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.573 [2024-11-20 00:00:04.626630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.573 [2024-11-20 00:00:04.626645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.573 [2024-11-20 00:00:04.626658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.573 [2024-11-20 00:00:04.639516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-20 00:00:04.639911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-20 00:00:04.639942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-20 00:00:04.639960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.573 [2024-11-20 00:00:04.640211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.573 [2024-11-20 00:00:04.640455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.573 [2024-11-20 00:00:04.640478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.573 [2024-11-20 00:00:04.640493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.573 [2024-11-20 00:00:04.640507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.573 [2024-11-20 00:00:04.653386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.573 [2024-11-20 00:00:04.653793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.573 [2024-11-20 00:00:04.653824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.573 [2024-11-20 00:00:04.653841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.573 [2024-11-20 00:00:04.654091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.573 [2024-11-20 00:00:04.654335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-20 00:00:04.654359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-20 00:00:04.654374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-20 00:00:04.654387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.574 [2024-11-20 00:00:04.667245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-20 00:00:04.667610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-20 00:00:04.667641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-20 00:00:04.667658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.574 [2024-11-20 00:00:04.667896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.574 [2024-11-20 00:00:04.668152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-20 00:00:04.668176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-20 00:00:04.668198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-20 00:00:04.668213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.574 [2024-11-20 00:00:04.681279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-20 00:00:04.681678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-20 00:00:04.681709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-20 00:00:04.681728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.574 [2024-11-20 00:00:04.681967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.574 [2024-11-20 00:00:04.682220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-20 00:00:04.682244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-20 00:00:04.682259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-20 00:00:04.682274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.574 [2024-11-20 00:00:04.695125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-20 00:00:04.695518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-20 00:00:04.695549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-20 00:00:04.695566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.574 [2024-11-20 00:00:04.695805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.574 [2024-11-20 00:00:04.696049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-20 00:00:04.696081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-20 00:00:04.696098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-20 00:00:04.696113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.574 [2024-11-20 00:00:04.708962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-20 00:00:04.709348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-20 00:00:04.709380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-20 00:00:04.709398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.574 [2024-11-20 00:00:04.709636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.574 [2024-11-20 00:00:04.709879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-20 00:00:04.709904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-20 00:00:04.709919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-20 00:00:04.709933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.574 [2024-11-20 00:00:04.723000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-20 00:00:04.723416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-20 00:00:04.723447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-20 00:00:04.723464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.574 [2024-11-20 00:00:04.723702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.574 [2024-11-20 00:00:04.723945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-20 00:00:04.723969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-20 00:00:04.723986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-20 00:00:04.724000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.574 [2024-11-20 00:00:04.736869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-20 00:00:04.737259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-20 00:00:04.737290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-20 00:00:04.737308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.574 [2024-11-20 00:00:04.737546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.574 [2024-11-20 00:00:04.737790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-20 00:00:04.737813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-20 00:00:04.737828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-20 00:00:04.737842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.574 [2024-11-20 00:00:04.750728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-20 00:00:04.751100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-20 00:00:04.751132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-20 00:00:04.751149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.574 [2024-11-20 00:00:04.751387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.574 [2024-11-20 00:00:04.751631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-20 00:00:04.751654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-20 00:00:04.751669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-20 00:00:04.751683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.574 [2024-11-20 00:00:04.764764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-20 00:00:04.765135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-20 00:00:04.765166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-20 00:00:04.765190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.574 [2024-11-20 00:00:04.765429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.574 [2024-11-20 00:00:04.765673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-20 00:00:04.765696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-20 00:00:04.765711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-20 00:00:04.765725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.574 [2024-11-20 00:00:04.778614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-20 00:00:04.778978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-20 00:00:04.779010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-20 00:00:04.779029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.574 [2024-11-20 00:00:04.779276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.574 [2024-11-20 00:00:04.779522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.574 [2024-11-20 00:00:04.779547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.574 [2024-11-20 00:00:04.779563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.574 [2024-11-20 00:00:04.779578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.574 [2024-11-20 00:00:04.792460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.574 [2024-11-20 00:00:04.792859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.574 [2024-11-20 00:00:04.792890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.574 [2024-11-20 00:00:04.792907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.574 [2024-11-20 00:00:04.793157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.575 [2024-11-20 00:00:04.793400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.575 [2024-11-20 00:00:04.793424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.575 [2024-11-20 00:00:04.793439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.575 [2024-11-20 00:00:04.793453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.575 [2024-11-20 00:00:04.806307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.575 [2024-11-20 00:00:04.806709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.575 [2024-11-20 00:00:04.806741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.575 [2024-11-20 00:00:04.806760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.575 [2024-11-20 00:00:04.806999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.575 [2024-11-20 00:00:04.807257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.575 [2024-11-20 00:00:04.807282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.575 [2024-11-20 00:00:04.807297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.575 [2024-11-20 00:00:04.807311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.575 [2024-11-20 00:00:04.820174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.575 [2024-11-20 00:00:04.820563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.575 [2024-11-20 00:00:04.820594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.575 [2024-11-20 00:00:04.820611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.575 [2024-11-20 00:00:04.820850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.575 [2024-11-20 00:00:04.821104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.575 [2024-11-20 00:00:04.821128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.575 [2024-11-20 00:00:04.821142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.575 [2024-11-20 00:00:04.821156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.575 [2024-11-20 00:00:04.834017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.575 [2024-11-20 00:00:04.834372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.575 [2024-11-20 00:00:04.834403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.575 [2024-11-20 00:00:04.834421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.575 [2024-11-20 00:00:04.834659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.575 [2024-11-20 00:00:04.834903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.575 [2024-11-20 00:00:04.834926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.575 [2024-11-20 00:00:04.834941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.575 [2024-11-20 00:00:04.834955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.575 4417.00 IOPS, 17.25 MiB/s [2024-11-19T23:00:04.887Z] [2024-11-20 00:00:04.849784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.575 [2024-11-20 00:00:04.850176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.575 [2024-11-20 00:00:04.850208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.575 [2024-11-20 00:00:04.850225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.575 [2024-11-20 00:00:04.850464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.575 [2024-11-20 00:00:04.850707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.575 [2024-11-20 00:00:04.850731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.575 [2024-11-20 00:00:04.850752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.575 [2024-11-20 00:00:04.850767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.575 [2024-11-20 00:00:04.863645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.575 [2024-11-20 00:00:04.864038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.575 [2024-11-20 00:00:04.864078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.575 [2024-11-20 00:00:04.864099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.575 [2024-11-20 00:00:04.864337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.575 [2024-11-20 00:00:04.864580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.575 [2024-11-20 00:00:04.864604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.575 [2024-11-20 00:00:04.864619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.575 [2024-11-20 00:00:04.864633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.575 [2024-11-20 00:00:04.877493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.575 [2024-11-20 00:00:04.877881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.575 [2024-11-20 00:00:04.877912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.575 [2024-11-20 00:00:04.877930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.575 [2024-11-20 00:00:04.878179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.575 [2024-11-20 00:00:04.878423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.575 [2024-11-20 00:00:04.878447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.575 [2024-11-20 00:00:04.878461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.575 [2024-11-20 00:00:04.878476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.833 [2024-11-20 00:00:04.891355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:04.891727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:04.891759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:04.891776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:04.892015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:04.892268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:04.892293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:04.892308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:04.892322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.834 [2024-11-20 00:00:04.905393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:04.905774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:04.905805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:04.905823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:04.906061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:04.906317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:04.906340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:04.906355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:04.906369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.834 [2024-11-20 00:00:04.919435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:04.919806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:04.919836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:04.919854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:04.920106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:04.920349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:04.920373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:04.920388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:04.920401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.834 [2024-11-20 00:00:04.933472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:04.933846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:04.933876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:04.933894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:04.934144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:04.934387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:04.934411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:04.934427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:04.934440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.834 [2024-11-20 00:00:04.947313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:04.947718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:04.947749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:04.947773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:04.948011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:04.948263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:04.948288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:04.948303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:04.948316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.834 [2024-11-20 00:00:04.961200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:04.961600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:04.961631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:04.961649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:04.961889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:04.962142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:04.962167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:04.962182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:04.962196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.834 [2024-11-20 00:00:04.975064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:04.975441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:04.975473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:04.975491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:04.975737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:04.975991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:04.976018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:04.976033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:04.976048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.834 [2024-11-20 00:00:04.988926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:04.989302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:04.989334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:04.989352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:04.989590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:04.989851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:04.989875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:04.989890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:04.989904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.834 [2024-11-20 00:00:05.002811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:05.003194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:05.003226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:05.003244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:05.003482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:05.003726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:05.003749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:05.003765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:05.003779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.834 [2024-11-20 00:00:05.016857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:05.017239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:05.017270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:05.017288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:05.017526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:05.017769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:05.017793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:05.017808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:05.017821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.834 [2024-11-20 00:00:05.030900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:05.031273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:05.031306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:05.031324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:05.031563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:05.031806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:05.031830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:05.031845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:05.031865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.834 [2024-11-20 00:00:05.044938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:05.045321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:05.045351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:05.045369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:05.045607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:05.045851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:05.045874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:05.045889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:05.045903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.834 [2024-11-20 00:00:05.058795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:05.059172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:05.059204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:05.059222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:05.059461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:05.059704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:05.059727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:05.059742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:05.059756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.834 [2024-11-20 00:00:05.072638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:05.073030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:05.073060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:05.073087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:05.073326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:05.073569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:05.073592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:05.073608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:05.073622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.834 [2024-11-20 00:00:05.086490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:05.086874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:05.086905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:05.086923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:05.087174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:05.087417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:05.087440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:05.087455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:05.087470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.834 [2024-11-20 00:00:05.100348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.834 [2024-11-20 00:00:05.100712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.834 [2024-11-20 00:00:05.100743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.834 [2024-11-20 00:00:05.100761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.834 [2024-11-20 00:00:05.100998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.834 [2024-11-20 00:00:05.101256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.834 [2024-11-20 00:00:05.101281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.834 [2024-11-20 00:00:05.101296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.834 [2024-11-20 00:00:05.101309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.835 [2024-11-20 00:00:05.114212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-20 00:00:05.114576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-20 00:00:05.114606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-20 00:00:05.114624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.835 [2024-11-20 00:00:05.114861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.835 [2024-11-20 00:00:05.115114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-20 00:00:05.115139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-20 00:00:05.115154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-20 00:00:05.115168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.835 [2024-11-20 00:00:05.128241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-20 00:00:05.128621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-20 00:00:05.128651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-20 00:00:05.128669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.835 [2024-11-20 00:00:05.128918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.835 [2024-11-20 00:00:05.129176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-20 00:00:05.129201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-20 00:00:05.129216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-20 00:00:05.129230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.835 [2024-11-20 00:00:05.142089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.835 [2024-11-20 00:00:05.142434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.835 [2024-11-20 00:00:05.142468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:30.835 [2024-11-20 00:00:05.142486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:30.835 [2024-11-20 00:00:05.142724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:30.835 [2024-11-20 00:00:05.142968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.835 [2024-11-20 00:00:05.142991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.835 [2024-11-20 00:00:05.143006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.835 [2024-11-20 00:00:05.143020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.094 [2024-11-20 00:00:05.156124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.094 [2024-11-20 00:00:05.156522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.094 [2024-11-20 00:00:05.156553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.094 [2024-11-20 00:00:05.156570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.094 [2024-11-20 00:00:05.156809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.094 [2024-11-20 00:00:05.157053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.094 [2024-11-20 00:00:05.157087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.094 [2024-11-20 00:00:05.157104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.094 [2024-11-20 00:00:05.157118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.094 [2024-11-20 00:00:05.169974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.094 [2024-11-20 00:00:05.170384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.094 [2024-11-20 00:00:05.170415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.094 [2024-11-20 00:00:05.170432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.094 [2024-11-20 00:00:05.170670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.094 [2024-11-20 00:00:05.170914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.094 [2024-11-20 00:00:05.170943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.094 [2024-11-20 00:00:05.170959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.094 [2024-11-20 00:00:05.170973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.094 [2024-11-20 00:00:05.183894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.094 [2024-11-20 00:00:05.184277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.094 [2024-11-20 00:00:05.184308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.094 [2024-11-20 00:00:05.184325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.094 [2024-11-20 00:00:05.184565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.094 [2024-11-20 00:00:05.184808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.094 [2024-11-20 00:00:05.184831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.094 [2024-11-20 00:00:05.184847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.094 [2024-11-20 00:00:05.184860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.094 [2024-11-20 00:00:05.197729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.094 [2024-11-20 00:00:05.198109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.094 [2024-11-20 00:00:05.198140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.094 [2024-11-20 00:00:05.198158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.094 [2024-11-20 00:00:05.198396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.094 [2024-11-20 00:00:05.198640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.094 [2024-11-20 00:00:05.198663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.094 [2024-11-20 00:00:05.198678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.094 [2024-11-20 00:00:05.198692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.094 [2024-11-20 00:00:05.211772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.094 [2024-11-20 00:00:05.212152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.094 [2024-11-20 00:00:05.212183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.094 [2024-11-20 00:00:05.212202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.094 [2024-11-20 00:00:05.212440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.094 [2024-11-20 00:00:05.212684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.094 [2024-11-20 00:00:05.212707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.094 [2024-11-20 00:00:05.212721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.094 [2024-11-20 00:00:05.212741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.094 [2024-11-20 00:00:05.225610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.094 [2024-11-20 00:00:05.225974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.094 [2024-11-20 00:00:05.226006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.094 [2024-11-20 00:00:05.226024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.094 [2024-11-20 00:00:05.226275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.094 [2024-11-20 00:00:05.226520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.094 [2024-11-20 00:00:05.226544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.094 [2024-11-20 00:00:05.226559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.094 [2024-11-20 00:00:05.226573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.094 [2024-11-20 00:00:05.239643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.094 [2024-11-20 00:00:05.240020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.094 [2024-11-20 00:00:05.240051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.094 [2024-11-20 00:00:05.240078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.094 [2024-11-20 00:00:05.240319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.094 [2024-11-20 00:00:05.240563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.094 [2024-11-20 00:00:05.240587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.094 [2024-11-20 00:00:05.240602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.094 [2024-11-20 00:00:05.240616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.094 [2024-11-20 00:00:05.253521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.094 [2024-11-20 00:00:05.253932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.094 [2024-11-20 00:00:05.253963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.094 [2024-11-20 00:00:05.253980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.094 [2024-11-20 00:00:05.254228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.094 [2024-11-20 00:00:05.254471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.094 [2024-11-20 00:00:05.254494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.094 [2024-11-20 00:00:05.254510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.095 [2024-11-20 00:00:05.254524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.095 [2024-11-20 00:00:05.267382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.095 [2024-11-20 00:00:05.267781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.095 [2024-11-20 00:00:05.267812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.095 [2024-11-20 00:00:05.267829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.095 [2024-11-20 00:00:05.268079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.095 [2024-11-20 00:00:05.268334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.095 [2024-11-20 00:00:05.268359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.095 [2024-11-20 00:00:05.268374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.095 [2024-11-20 00:00:05.268389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.095 [2024-11-20 00:00:05.281243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.095 [2024-11-20 00:00:05.281635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.095 [2024-11-20 00:00:05.281666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.095 [2024-11-20 00:00:05.281683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.095 [2024-11-20 00:00:05.281921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.095 [2024-11-20 00:00:05.282178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.095 [2024-11-20 00:00:05.282202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.095 [2024-11-20 00:00:05.282217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.095 [2024-11-20 00:00:05.282231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.095 [2024-11-20 00:00:05.295085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.095 [2024-11-20 00:00:05.295459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.095 [2024-11-20 00:00:05.295490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.095 [2024-11-20 00:00:05.295508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.095 [2024-11-20 00:00:05.295745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.095 [2024-11-20 00:00:05.295989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.095 [2024-11-20 00:00:05.296013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.095 [2024-11-20 00:00:05.296028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.095 [2024-11-20 00:00:05.296041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.095 [2024-11-20 00:00:05.308908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.095 [2024-11-20 00:00:05.309319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.095 [2024-11-20 00:00:05.309350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.095 [2024-11-20 00:00:05.309368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.095 [2024-11-20 00:00:05.309612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.095 [2024-11-20 00:00:05.309855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.095 [2024-11-20 00:00:05.309879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.095 [2024-11-20 00:00:05.309894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.095 [2024-11-20 00:00:05.309908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.095 [2024-11-20 00:00:05.322767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.095 [2024-11-20 00:00:05.323149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.095 [2024-11-20 00:00:05.323180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.095 [2024-11-20 00:00:05.323197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.095 [2024-11-20 00:00:05.323436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.095 [2024-11-20 00:00:05.323679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.095 [2024-11-20 00:00:05.323702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.095 [2024-11-20 00:00:05.323719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.095 [2024-11-20 00:00:05.323734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.095 [2024-11-20 00:00:05.336806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.095 [2024-11-20 00:00:05.337216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.095 [2024-11-20 00:00:05.337248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.095 [2024-11-20 00:00:05.337266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.095 [2024-11-20 00:00:05.337511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.095 [2024-11-20 00:00:05.337753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.095 [2024-11-20 00:00:05.337776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.095 [2024-11-20 00:00:05.337791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.095 [2024-11-20 00:00:05.337806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.095 [2024-11-20 00:00:05.350677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.095 [2024-11-20 00:00:05.351077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.095 [2024-11-20 00:00:05.351109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.095 [2024-11-20 00:00:05.351127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.095 [2024-11-20 00:00:05.351376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.095 [2024-11-20 00:00:05.351620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.095 [2024-11-20 00:00:05.351649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.095 [2024-11-20 00:00:05.351665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.095 [2024-11-20 00:00:05.351679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.095 [2024-11-20 00:00:05.364547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.095 [2024-11-20 00:00:05.364921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.095 [2024-11-20 00:00:05.364953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.095 [2024-11-20 00:00:05.364970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.095 [2024-11-20 00:00:05.365221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.095 [2024-11-20 00:00:05.365464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.095 [2024-11-20 00:00:05.365488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.095 [2024-11-20 00:00:05.365503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.095 [2024-11-20 00:00:05.365517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.095 [2024-11-20 00:00:05.378389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.095 [2024-11-20 00:00:05.378802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.095 [2024-11-20 00:00:05.378832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.095 [2024-11-20 00:00:05.378850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.095 [2024-11-20 00:00:05.379100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.095 [2024-11-20 00:00:05.379344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.095 [2024-11-20 00:00:05.379368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.095 [2024-11-20 00:00:05.379383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.095 [2024-11-20 00:00:05.379397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.095 [2024-11-20 00:00:05.392256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.095 [2024-11-20 00:00:05.392652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.095 [2024-11-20 00:00:05.392683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.095 [2024-11-20 00:00:05.392700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.095 [2024-11-20 00:00:05.392938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.095 [2024-11-20 00:00:05.393195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.095 [2024-11-20 00:00:05.393219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.095 [2024-11-20 00:00:05.393235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.095 [2024-11-20 00:00:05.393254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.354 [2024-11-20 00:00:05.406115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.354 [2024-11-20 00:00:05.406503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.354 [2024-11-20 00:00:05.406534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.354 [2024-11-20 00:00:05.406552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.354 [2024-11-20 00:00:05.406790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.354 [2024-11-20 00:00:05.407033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.354 [2024-11-20 00:00:05.407056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.354 [2024-11-20 00:00:05.407082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.354 [2024-11-20 00:00:05.407100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.354 [2024-11-20 00:00:05.419951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.354 [2024-11-20 00:00:05.420325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.354 [2024-11-20 00:00:05.420356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.354 [2024-11-20 00:00:05.420374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.354 [2024-11-20 00:00:05.420612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.354 [2024-11-20 00:00:05.420856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.354 [2024-11-20 00:00:05.420879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.354 [2024-11-20 00:00:05.420894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.354 [2024-11-20 00:00:05.420908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.354 [2024-11-20 00:00:05.433982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.354 [2024-11-20 00:00:05.434377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.354 [2024-11-20 00:00:05.434408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.354 [2024-11-20 00:00:05.434426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.354 [2024-11-20 00:00:05.434664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.354 [2024-11-20 00:00:05.434908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.354 [2024-11-20 00:00:05.434931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.354 [2024-11-20 00:00:05.434946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.354 [2024-11-20 00:00:05.434960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.354 [2024-11-20 00:00:05.447857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.354 [2024-11-20 00:00:05.448269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.354 [2024-11-20 00:00:05.448306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.354 [2024-11-20 00:00:05.448324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.354 [2024-11-20 00:00:05.448562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.354 [2024-11-20 00:00:05.448805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.354 [2024-11-20 00:00:05.448829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.354 [2024-11-20 00:00:05.448844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.354 [2024-11-20 00:00:05.448858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
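The block above repeats a single failure cycle: bdev_nvme requests a controller reset, the initiator's connect() toward 10.0.0.2:4420 returns errno 111 (ECONNREFUSED, nothing is accepting on that port), the qpair is torn down with "Bad file descriptor", and spdk_nvme_ctrlr_reconnect_poll_async gives up, so each reset attempt is reported as failed. That is the expected pattern while no nvmf target is listening on that address. As an illustrative check only (nc and ss are assumed to be available on the test host and are not part of these scripts; in this test the probe would also have to run in the same network namespace, e.g. via ip netns exec), one could confirm from a shell that the port has no listener:

# Illustrative sketch, not part of bdevperf.sh: check whether anything
# accepts TCP connections on the address/port the initiator keeps retrying.
ADDR=10.0.0.2   # target address from the log above
PORT=4420       # NVMe/TCP port from the log above
nc -z -w 1 "$ADDR" "$PORT" && echo "listener present" || echo "refused or timed out (matches errno 111)"
ss -ltn | grep ":$PORT " || echo "no TCP listener on port $PORT"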
00:35:31.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 337661 Killed "${NVMF_APP[@]}" "$@" 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=338729 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 338729 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 338729 ']' 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:31.354 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.354 [2024-11-20 00:00:05.461747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.354 [2024-11-20 00:00:05.462148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.354 [2024-11-20 00:00:05.462180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.354 [2024-11-20 00:00:05.462198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.354 [2024-11-20 00:00:05.462436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.354 [2024-11-20 00:00:05.462679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.354 [2024-11-20 00:00:05.462702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.354 [2024-11-20 00:00:05.462717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.354 [2024-11-20 00:00:05.462733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.354 [2024-11-20 00:00:05.475596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.354 [2024-11-20 00:00:05.475984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.354 [2024-11-20 00:00:05.476014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.354 [2024-11-20 00:00:05.476032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.354 [2024-11-20 00:00:05.476279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.354 [2024-11-20 00:00:05.476523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.354 [2024-11-20 00:00:05.476547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.354 [2024-11-20 00:00:05.476562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.354 [2024-11-20 00:00:05.476577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.354 [2024-11-20 00:00:05.489442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.354 [2024-11-20 00:00:05.489842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.354 [2024-11-20 00:00:05.489872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.354 [2024-11-20 00:00:05.489891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.354 [2024-11-20 00:00:05.490141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.354 [2024-11-20 00:00:05.490384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.354 [2024-11-20 00:00:05.490415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.354 [2024-11-20 00:00:05.490430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.354 [2024-11-20 00:00:05.490444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.354 [2024-11-20 00:00:05.503315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.354 [2024-11-20 00:00:05.503706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.354 [2024-11-20 00:00:05.503738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.354 [2024-11-20 00:00:05.503756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.354 [2024-11-20 00:00:05.503994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.354 [2024-11-20 00:00:05.504247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.354 [2024-11-20 00:00:05.504272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.354 [2024-11-20 00:00:05.504287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.354 [2024-11-20 00:00:05.504301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.354 [2024-11-20 00:00:05.508218] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:35:31.354 [2024-11-20 00:00:05.508302] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:31.354 [2024-11-20 00:00:05.517160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.354 [2024-11-20 00:00:05.517554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.354 [2024-11-20 00:00:05.517585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.354 [2024-11-20 00:00:05.517604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.355 [2024-11-20 00:00:05.517842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.355 [2024-11-20 00:00:05.518096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.355 [2024-11-20 00:00:05.518121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.355 [2024-11-20 00:00:05.518137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.355 [2024-11-20 00:00:05.518150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.355 [2024-11-20 00:00:05.531002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.355 [2024-11-20 00:00:05.531378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.355 [2024-11-20 00:00:05.531409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.355 [2024-11-20 00:00:05.531427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.355 [2024-11-20 00:00:05.531665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.355 [2024-11-20 00:00:05.531908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.355 [2024-11-20 00:00:05.531932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.355 [2024-11-20 00:00:05.531948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.355 [2024-11-20 00:00:05.531962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.355 [2024-11-20 00:00:05.545034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.355 [2024-11-20 00:00:05.545474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.355 [2024-11-20 00:00:05.545505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.355 [2024-11-20 00:00:05.545523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.355 [2024-11-20 00:00:05.545762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.355 [2024-11-20 00:00:05.546005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.355 [2024-11-20 00:00:05.546029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.355 [2024-11-20 00:00:05.546044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.355 [2024-11-20 00:00:05.546058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.355 [2024-11-20 00:00:05.558956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.355 [2024-11-20 00:00:05.559369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.355 [2024-11-20 00:00:05.559400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.355 [2024-11-20 00:00:05.559423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.355 [2024-11-20 00:00:05.559662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.355 [2024-11-20 00:00:05.559906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.355 [2024-11-20 00:00:05.559929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.355 [2024-11-20 00:00:05.559944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.355 [2024-11-20 00:00:05.559958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.355 [2024-11-20 00:00:05.572835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.355 [2024-11-20 00:00:05.573243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.355 [2024-11-20 00:00:05.573274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.355 [2024-11-20 00:00:05.573292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.355 [2024-11-20 00:00:05.573538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.355 [2024-11-20 00:00:05.573781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.355 [2024-11-20 00:00:05.573805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.355 [2024-11-20 00:00:05.573820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.355 [2024-11-20 00:00:05.573834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.355 [2024-11-20 00:00:05.586707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.355 [2024-11-20 00:00:05.587085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.355 [2024-11-20 00:00:05.587117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.355 [2024-11-20 00:00:05.587134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.355 [2024-11-20 00:00:05.587372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.355 [2024-11-20 00:00:05.587615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.355 [2024-11-20 00:00:05.587639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.355 [2024-11-20 00:00:05.587654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.355 [2024-11-20 00:00:05.587668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.355 [2024-11-20 00:00:05.591148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:31.355 [2024-11-20 00:00:05.600587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.355 [2024-11-20 00:00:05.601082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.355 [2024-11-20 00:00:05.601122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.355 [2024-11-20 00:00:05.601142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.355 [2024-11-20 00:00:05.601398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.355 [2024-11-20 00:00:05.601645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.355 [2024-11-20 00:00:05.601669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.355 [2024-11-20 00:00:05.601686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.355 [2024-11-20 00:00:05.601704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.355 [2024-11-20 00:00:05.614601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.355 [2024-11-20 00:00:05.615110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.355 [2024-11-20 00:00:05.615148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.355 [2024-11-20 00:00:05.615167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.355 [2024-11-20 00:00:05.615410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.355 [2024-11-20 00:00:05.615656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.355 [2024-11-20 00:00:05.615680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.355 [2024-11-20 00:00:05.615696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.355 [2024-11-20 00:00:05.615712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.355 [2024-11-20 00:00:05.628591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.355 [2024-11-20 00:00:05.628966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.355 [2024-11-20 00:00:05.628997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.355 [2024-11-20 00:00:05.629015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.355 [2024-11-20 00:00:05.629266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.355 [2024-11-20 00:00:05.629510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.355 [2024-11-20 00:00:05.629534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.355 [2024-11-20 00:00:05.629550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.355 [2024-11-20 00:00:05.629564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.355 [2024-11-20 00:00:05.641863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:31.355 [2024-11-20 00:00:05.641904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:31.355 [2024-11-20 00:00:05.641920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:31.355 [2024-11-20 00:00:05.641933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:31.355 [2024-11-20 00:00:05.641944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:31.355 [2024-11-20 00:00:05.642634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.355 [2024-11-20 00:00:05.643015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.355 [2024-11-20 00:00:05.643057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.355 [2024-11-20 00:00:05.643093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.355 [2024-11-20 00:00:05.643335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.355 [2024-11-20 00:00:05.643515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:31.355 [2024-11-20 00:00:05.643588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.355 [2024-11-20 00:00:05.643613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.355 [2024-11-20 00:00:05.643628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.355 [2024-11-20 00:00:05.643643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.355 [2024-11-20 00:00:05.643576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:31.355 [2024-11-20 00:00:05.643580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.355 [2024-11-20 00:00:05.656579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.355 [2024-11-20 00:00:05.657103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.355 [2024-11-20 00:00:05.657146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.355 [2024-11-20 00:00:05.657167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.355 [2024-11-20 00:00:05.657414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.355 [2024-11-20 00:00:05.657662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.355 [2024-11-20 00:00:05.657687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.355 [2024-11-20 00:00:05.657704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.355 [2024-11-20 00:00:05.657722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
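For the reactor notices above: the target is started with -m 0xE and the EAL line shows -c 0xE; 0xE is binary 1110, so cores 1, 2 and 3 are selected, which matches "Total cores available: 3" and the three "Reactor started on core N" messages. A tiny illustrative decode of such a mask (plain bash, not part of the test):

# Illustrative only: list the core IDs selected by an SPDK/DPDK core mask.
mask=0xE
for i in $(seq 0 31); do
  (( (mask >> i) & 1 )) && echo "core $i selected"
done
# prints: core 1 selected, core 2 selected, core 3 selected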
00:35:31.612 [2024-11-20 00:00:05.670643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.612 [2024-11-20 00:00:05.671150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-20 00:00:05.671191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.612 [2024-11-20 00:00:05.671212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.612 [2024-11-20 00:00:05.671459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.612 [2024-11-20 00:00:05.671707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.612 [2024-11-20 00:00:05.671732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.612 [2024-11-20 00:00:05.671749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.612 [2024-11-20 00:00:05.671766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.612 [2024-11-20 00:00:05.684659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.612 [2024-11-20 00:00:05.685234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-20 00:00:05.685278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.612 [2024-11-20 00:00:05.685311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.612 [2024-11-20 00:00:05.685559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.612 [2024-11-20 00:00:05.685808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.612 [2024-11-20 00:00:05.685832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.612 [2024-11-20 00:00:05.685849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.612 [2024-11-20 00:00:05.685866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.612 [2024-11-20 00:00:05.698763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.612 [2024-11-20 00:00:05.699289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-20 00:00:05.699330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.612 [2024-11-20 00:00:05.699350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.612 [2024-11-20 00:00:05.699596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.612 [2024-11-20 00:00:05.699844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.612 [2024-11-20 00:00:05.699868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.612 [2024-11-20 00:00:05.699885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.612 [2024-11-20 00:00:05.699901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.612 [2024-11-20 00:00:05.712790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.612 [2024-11-20 00:00:05.713352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-20 00:00:05.713394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.612 [2024-11-20 00:00:05.713415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.612 [2024-11-20 00:00:05.713661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.612 [2024-11-20 00:00:05.713909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.612 [2024-11-20 00:00:05.713935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.612 [2024-11-20 00:00:05.713953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.612 [2024-11-20 00:00:05.713970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.612 [2024-11-20 00:00:05.726884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.612 [2024-11-20 00:00:05.727432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-20 00:00:05.727473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.612 [2024-11-20 00:00:05.727495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.612 [2024-11-20 00:00:05.727740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.612 [2024-11-20 00:00:05.728007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.612 [2024-11-20 00:00:05.728031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.612 [2024-11-20 00:00:05.728048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.612 [2024-11-20 00:00:05.728066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.612 [2024-11-20 00:00:05.740943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.612 [2024-11-20 00:00:05.741374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-20 00:00:05.741415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.612 [2024-11-20 00:00:05.741433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.612 [2024-11-20 00:00:05.741674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.612 [2024-11-20 00:00:05.741918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.612 [2024-11-20 00:00:05.741942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.612 [2024-11-20 00:00:05.741957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.612 [2024-11-20 00:00:05.741973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.612 [2024-11-20 00:00:05.754653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.612 [2024-11-20 00:00:05.754995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-20 00:00:05.755023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.612 [2024-11-20 00:00:05.755039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.612 [2024-11-20 00:00:05.755262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.612 [2024-11-20 00:00:05.755483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.612 [2024-11-20 00:00:05.755504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.612 [2024-11-20 00:00:05.755518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.612 [2024-11-20 00:00:05.755530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.612 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:31.612 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:31.612 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:31.612 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:31.612 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.613 [2024-11-20 00:00:05.768333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.613 [2024-11-20 00:00:05.768722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-20 00:00:05.768751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.613 [2024-11-20 00:00:05.768769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.613 [2024-11-20 00:00:05.769007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.613 [2024-11-20 00:00:05.769255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.613 [2024-11-20 00:00:05.769278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.613 [2024-11-20 00:00:05.769294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.613 [2024-11-20 00:00:05.769307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.613 [2024-11-20 00:00:05.781876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.613 [2024-11-20 00:00:05.782254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-20 00:00:05.782283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.613 [2024-11-20 00:00:05.782299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.613 [2024-11-20 00:00:05.782514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.613 [2024-11-20 00:00:05.782741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.613 [2024-11-20 00:00:05.782762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.613 [2024-11-20 00:00:05.782775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.613 [2024-11-20 00:00:05.782788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.613 [2024-11-20 00:00:05.795290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.613 [2024-11-20 00:00:05.795430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:31.613 [2024-11-20 00:00:05.795658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-20 00:00:05.795686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.613 [2024-11-20 00:00:05.795702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.613 [2024-11-20 00:00:05.795916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.613 [2024-11-20 00:00:05.796171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.613 [2024-11-20 00:00:05.796194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.613 [2024-11-20 00:00:05.796208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.613 [2024-11-20 00:00:05.796221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.613 [2024-11-20 00:00:05.808905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.613 [2024-11-20 00:00:05.809312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-20 00:00:05.809342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.613 [2024-11-20 00:00:05.809358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.613 [2024-11-20 00:00:05.809601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.613 [2024-11-20 00:00:05.809815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.613 [2024-11-20 00:00:05.809836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.613 [2024-11-20 00:00:05.809850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.613 [2024-11-20 00:00:05.809863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.613 [2024-11-20 00:00:05.822455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.613 [2024-11-20 00:00:05.822787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-20 00:00:05.822814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.613 [2024-11-20 00:00:05.822830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.613 [2024-11-20 00:00:05.823054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.613 [2024-11-20 00:00:05.823299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.613 [2024-11-20 00:00:05.823321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.613 [2024-11-20 00:00:05.823335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.613 [2024-11-20 00:00:05.823348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.613 Malloc0 00:35:31.613 [2024-11-20 00:00:05.835888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.613 [2024-11-20 00:00:05.836406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-20 00:00:05.836440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.613 [2024-11-20 00:00:05.836458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.613 [2024-11-20 00:00:05.836679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.613 [2024-11-20 00:00:05.836901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.613 [2024-11-20 00:00:05.836924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.613 [2024-11-20 00:00:05.836939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.613 [2024-11-20 00:00:05.836967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.613 [2024-11-20 00:00:05.849526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.613 [2024-11-20 00:00:05.849888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-20 00:00:05.849915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd25cf0 with addr=10.0.0.2, port=4420 00:35:31.613 [2024-11-20 00:00:05.849931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd25cf0 is same with the state(6) to be set 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:31.613 3680.83 IOPS, 14.38 MiB/s [2024-11-19T23:00:05.925Z] [2024-11-20 00:00:05.851705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd25cf0 (9): Bad file descriptor 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.613 [2024-11-20 00:00:05.851938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.613 [2024-11-20 00:00:05.851960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.613 [2024-11-20 00:00:05.851974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.613 [2024-11-20 00:00:05.851987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.613 [2024-11-20 00:00:05.855471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.613 00:00:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 337948 00:35:31.613 [2024-11-20 00:00:05.863006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.613 [2024-11-20 00:00:05.886468] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
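[editor's sketch] The trace above finishes wiring up the bdevperf target: a TCP transport, a 64 MB Malloc bdev (512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. A minimal standalone sketch of the same configuration, issued directly with scripts/rpc.py (rpc_cmd wraps it); the $rpc variable and the default RPC socket are assumptions, every flag is copied verbatim from the trace:
  # assumed repo checkout path, taken from the workspace paths in this log
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                # TCP transport, options as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0                   # 64 MB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420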
00:35:33.938 4283.57 IOPS, 16.73 MiB/s [2024-11-19T23:00:09.183Z] 4787.25 IOPS, 18.70 MiB/s [2024-11-19T23:00:10.114Z] 5161.22 IOPS, 20.16 MiB/s [2024-11-19T23:00:11.042Z] 5473.20 IOPS, 21.38 MiB/s [2024-11-19T23:00:11.974Z] 5720.18 IOPS, 22.34 MiB/s [2024-11-19T23:00:12.906Z] 5942.33 IOPS, 23.21 MiB/s [2024-11-19T23:00:14.282Z] 6115.54 IOPS, 23.89 MiB/s [2024-11-19T23:00:15.219Z] 6269.64 IOPS, 24.49 MiB/s [2024-11-19T23:00:15.219Z] 6405.73 IOPS, 25.02 MiB/s 00:35:40.907 Latency(us) 00:35:40.907 [2024-11-19T23:00:15.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.907 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:40.907 Verification LBA range: start 0x0 length 0x4000 00:35:40.907 Nvme1n1 : 15.04 6386.98 24.95 8406.45 0.00 8601.67 825.27 46020.84 00:35:40.907 [2024-11-19T23:00:15.219Z] =================================================================================================================== 00:35:40.907 [2024-11-19T23:00:15.219Z] Total : 6386.98 24.95 8406.45 0.00 8601.67 825.27 46020.84 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:40.907 rmmod nvme_tcp 00:35:40.907 rmmod nvme_fabrics 00:35:40.907 rmmod nvme_keyring 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 338729 ']' 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 338729 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 338729 ']' 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 338729 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:40.907 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 338729 
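[editor's sketch] After the bdevperf summary, the script tears the target back down via nvmftestfini (the trace continues on the next lines). A condensed sketch of that cleanup path, commands copied from the trace; the $rpc and $nvmfpid variables are illustrative stand-ins, the actual pid in this run is 338729:
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1       # drop the subsystem first
  modprobe -v -r nvme-tcp                                     # unloads nvme_tcp (and friends, per the rmmod lines above)
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                          # killprocess on the nvmf_tgt pid
  iptables-save | grep -v SPDK_NVMF | iptables-restore        # strip the SPDK_NVMF accept rule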
00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 338729' 00:35:41.166 killing process with pid 338729 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 338729 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 338729 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:41.166 00:00:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:43.714 00:35:43.714 real 0m22.448s 00:35:43.714 user 0m58.973s 00:35:43.714 sys 0m4.577s 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.714 ************************************ 00:35:43.714 END TEST nvmf_bdevperf 00:35:43.714 ************************************ 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.714 ************************************ 00:35:43.714 START TEST nvmf_target_disconnect 00:35:43.714 ************************************ 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:43.714 * Looking for test storage... 
00:35:43.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:43.714 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:43.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.715 --rc genhtml_branch_coverage=1 00:35:43.715 --rc genhtml_function_coverage=1 00:35:43.715 --rc genhtml_legend=1 00:35:43.715 --rc geninfo_all_blocks=1 00:35:43.715 --rc geninfo_unexecuted_blocks=1 00:35:43.715 00:35:43.715 ' 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:43.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.715 --rc genhtml_branch_coverage=1 00:35:43.715 --rc genhtml_function_coverage=1 00:35:43.715 --rc genhtml_legend=1 00:35:43.715 --rc geninfo_all_blocks=1 00:35:43.715 --rc geninfo_unexecuted_blocks=1 00:35:43.715 00:35:43.715 ' 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:43.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.715 --rc genhtml_branch_coverage=1 00:35:43.715 --rc genhtml_function_coverage=1 00:35:43.715 --rc genhtml_legend=1 00:35:43.715 --rc geninfo_all_blocks=1 00:35:43.715 --rc geninfo_unexecuted_blocks=1 00:35:43.715 00:35:43.715 ' 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:43.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.715 --rc genhtml_branch_coverage=1 00:35:43.715 --rc genhtml_function_coverage=1 00:35:43.715 --rc genhtml_legend=1 00:35:43.715 --rc geninfo_all_blocks=1 00:35:43.715 --rc geninfo_unexecuted_blocks=1 00:35:43.715 00:35:43.715 ' 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:43.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:43.715 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:43.716 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:43.716 00:00:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:45.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:45.637 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:45.637 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:45.637 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
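[editor's sketch] The nvmf_tcp_init trace that continues below builds a two-port loopback topology on the discovered e810 ports: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule admits TCP/4420. A condensed sketch of that wiring, commands taken from the trace; run as root, and the cvl_0_* interface names are specific to this machine:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator sanity check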
00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.637 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:35:45.638 00:35:45.638 --- 10.0.0.2 ping statistics --- 00:35:45.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.638 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:45.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:35:45.638 00:35:45.638 --- 10.0.0.1 ping statistics --- 00:35:45.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.638 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:45.638 ************************************ 00:35:45.638 START TEST nvmf_target_disconnect_tc1 00:35:45.638 ************************************ 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:45.638 00:00:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:45.638 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:45.897 [2024-11-20 00:00:19.986566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.897 [2024-11-20 00:00:19.986647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2267a90 with addr=10.0.0.2, port=4420 00:35:45.897 [2024-11-20 00:00:19.986684] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:45.897 [2024-11-20 00:00:19.986706] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:45.897 [2024-11-20 00:00:19.986721] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:45.897 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:45.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:45.897 Initializing NVMe Controllers 00:35:45.897 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:35:45.897 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:45.897 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:45.897 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:45.897 00:35:45.897 real 0m0.105s 00:35:45.897 user 0m0.055s 00:35:45.897 sys 0m0.050s 00:35:45.897 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:45.897 00:00:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:45.897 ************************************ 00:35:45.897 END TEST nvmf_target_disconnect_tc1 00:35:45.897 ************************************ 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:45.897 ************************************ 00:35:45.897 START TEST nvmf_target_disconnect_tc2 00:35:45.897 ************************************ 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=342394 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 342394 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 342394 ']' 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.897 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:45.897 [2024-11-20 00:00:20.104562] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:35:45.897 [2024-11-20 00:00:20.104664] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.897 [2024-11-20 00:00:20.194465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:46.155 [2024-11-20 00:00:20.250154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:46.155 [2024-11-20 00:00:20.250216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:46.155 [2024-11-20 00:00:20.250260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:46.155 [2024-11-20 00:00:20.250285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:46.155 [2024-11-20 00:00:20.250307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:46.155 [2024-11-20 00:00:20.252275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:46.155 [2024-11-20 00:00:20.252339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:46.155 [2024-11-20 00:00:20.252470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:46.155 [2024-11-20 00:00:20.252481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:46.155 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:46.155 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:46.155 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:46.155 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:46.155 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:46.155 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:46.155 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:46.155 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.155 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:46.413 Malloc0 00:35:46.413 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.413 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:46.413 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.413 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:46.413 [2024-11-20 00:00:20.500957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.413 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.413 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:46.413 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.413 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:46.413 00:00:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.413 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:46.414 [2024-11-20 00:00:20.529250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=342425 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:46.414 00:00:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:48.327 00:00:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 342394 00:35:48.327 00:00:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error 
(sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 [2024-11-20 00:00:22.553894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read 
completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Read completed with error (sct=0, sc=8) 00:35:48.327 starting I/O failed 00:35:48.327 Write completed with error (sct=0, sc=8) 00:35:48.328 starting I/O failed 00:35:48.328 Read completed with error (sct=0, sc=8) 00:35:48.328 starting I/O failed 00:35:48.328 Read completed with error (sct=0, sc=8) 00:35:48.328 starting I/O failed 00:35:48.328 Read completed with error (sct=0, sc=8) 00:35:48.328 starting I/O failed 00:35:48.328 [2024-11-20 00:00:22.554186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:48.328 [2024-11-20 00:00:22.554350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.554381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.554514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.554539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.554670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.554695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.554825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.554852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.555001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.555027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 
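For reference, the tc2 setup and fault injection above boil down to a handful of commands. The following is a minimal sketch reconstructed from the trace in this log, not the harness itself: it assumes rpc_cmd resolves to SPDK's scripts/rpc.py on the default /var/tmp/spdk.sock (as the waitforlisten message above indicates) and reuses the cvl_0_0_ns_spdk namespace and reconnect arguments from this run.

# start the target with the same arguments as nvmf/common.sh@508 above (run from the spdk checkout)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# wait for the RPC socket (the harness uses waitforlisten for this)
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done

# same RPCs as host/target_disconnect.sh@19-26 above
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# start the reconnect initiator (host/target_disconnect.sh@40), then kill the target under it (sh@45)
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
sleep 2
kill -9 "$nvmfpid"

With the listener gone, the initiator's queued I/O is failed back and every reconnect attempt below hits a refused connection.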
00:35:48.328 [2024-11-20 00:00:22.555144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.555171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.555263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.555289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.555400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.555425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.555542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.555568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.555690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.555716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.555835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.555861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.555973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.556016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.556112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.556139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.556230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.556256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.556352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.556378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 
00:35:48.328 [2024-11-20 00:00:22.556467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.556498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.556613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.556639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.556734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.556760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.556856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.556882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.556985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.557012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.557115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.557142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.557234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.557260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.557389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.557415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.557544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.557569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.557721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.557747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 
00:35:48.328 [2024-11-20 00:00:22.557865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.557891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.558041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.558067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.558169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.558195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.558288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.558314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.558417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.558443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.558539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.558565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.328 [2024-11-20 00:00:22.558642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.328 [2024-11-20 00:00:22.558668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.328 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.558762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.558788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.558931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.558957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.559077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.559104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 
00:35:48.329 [2024-11-20 00:00:22.559228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.559254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.559348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.559373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.559492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.559518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.559639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.559665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.559775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.559800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.559884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.559910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.560051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.560109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.560205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.560239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.560349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.560376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.560506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.560533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 
00:35:48.329 [2024-11-20 00:00:22.560620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.560646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.560737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.560763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.560883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.560911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.561025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.561051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.561181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.561207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.561310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.561336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.561483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.561509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.561606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.561632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.561758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.561785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.561883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.561909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 
00:35:48.329 [2024-11-20 00:00:22.562029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.562055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.562166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.562192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.562286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.562312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.562395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.562421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.562547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.562575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.562694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.562720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.562802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.562828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.562976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.329 [2024-11-20 00:00:22.563002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.329 qpair failed and we were unable to recover it. 00:35:48.329 [2024-11-20 00:00:22.563095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.563122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.563216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.563241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 
00:35:48.330 [2024-11-20 00:00:22.563337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.563363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.563477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.563503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.563588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.563615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.563699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.563724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.563820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.563850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.563966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.563993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.564128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.564154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.564246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.564272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.564399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.564425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.564538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.564564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 
00:35:48.330 [2024-11-20 00:00:22.564709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.564735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.564827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.564853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.565057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.565116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.565262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.565289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.565387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.565414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.565513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.565541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.565639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.565666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.565784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.565811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.565934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.565961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.566080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.566106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 
00:35:48.330 [2024-11-20 00:00:22.566206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.566232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.566328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.566353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.566433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.566459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.566579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.566604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.566733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.566758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.566854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.566880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.567025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.567079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.567172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.567198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.567292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.567318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.330 qpair failed and we were unable to recover it. 00:35:48.330 [2024-11-20 00:00:22.567407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.330 [2024-11-20 00:00:22.567433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.331 qpair failed and we were unable to recover it. 
00:35:48.331 [2024-11-20 00:00:22.567534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.331 [2024-11-20 00:00:22.567559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.331 qpair failed and we were unable to recover it. 00:35:48.331 [2024-11-20 00:00:22.567678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.331 [2024-11-20 00:00:22.567708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.331 qpair failed and we were unable to recover it. 00:35:48.331 [2024-11-20 00:00:22.567828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.331 [2024-11-20 00:00:22.567854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.331 qpair failed and we were unable to recover it. 00:35:48.331 [2024-11-20 00:00:22.568186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.331 [2024-11-20 00:00:22.568212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.331 qpair failed and we were unable to recover it. 00:35:48.331 [2024-11-20 00:00:22.568297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.331 [2024-11-20 00:00:22.568323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.331 qpair failed and we were unable to recover it. 00:35:48.331 [2024-11-20 00:00:22.568423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.331 [2024-11-20 00:00:22.568449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.331 qpair failed and we were unable to recover it. 00:35:48.331 [2024-11-20 00:00:22.568567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.331 [2024-11-20 00:00:22.568593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.331 qpair failed and we were unable to recover it. 00:35:48.331 [2024-11-20 00:00:22.568739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.331 [2024-11-20 00:00:22.568765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.331 qpair failed and we were unable to recover it. 00:35:48.331 [2024-11-20 00:00:22.568886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.331 [2024-11-20 00:00:22.568912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.331 qpair failed and we were unable to recover it. 00:35:48.331 [2024-11-20 00:00:22.569023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.331 [2024-11-20 00:00:22.569048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.331 qpair failed and we were unable to recover it. 
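All of the connect() failures in this stretch carry errno = 111, which on Linux is ECONNREFUSED: after the kill -9 above, nothing is listening on 10.0.0.2:4420 any more, so each retry by the reconnect example is refused at the TCP level. A quick way to confirm the mapping on the build host (illustrative one-liner, not part of the test):

# errno 111 -> ECONNREFUSED ("Connection refused") on Linux
python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'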
00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 [2024-11-20 00:00:22.569356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting 
I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Read completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.331 Write completed with error (sct=0, sc=8) 00:35:48.331 starting I/O failed 00:35:48.332 Write completed with error (sct=0, sc=8) 00:35:48.332 starting I/O failed 00:35:48.332 Read completed with error (sct=0, sc=8) 00:35:48.332 starting I/O failed 00:35:48.332 [2024-11-20 00:00:22.569631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:48.332 [2024-11-20 00:00:22.569795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.569839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.569988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.570022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.570149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.570177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 
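In the completion dumps above, sct=0, sc=8 is NVMe status code type 0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion: once a qpair drops, the driver completes the reconnect example's outstanding reads and writes back with that aborted status rather than letting them succeed. To check the constant against the tree under test (sketch, assuming the spdk checkout used above):

# expected to show SPDK_NVME_SC_ABORTED_SQ_DELETION = 0x8 among the generic status codes
grep -n ABORTED_SQ_DELETION include/spdk/nvme_spec.h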
00:35:48.332 [2024-11-20 00:00:22.570277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.570304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.570393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.570426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.570550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.570577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.570686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.570729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.570838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.570866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.570985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.571011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.571120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.571148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.571279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.571306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.571404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.571431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.571557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.571585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 
00:35:48.332 [2024-11-20 00:00:22.571705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.571733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.571825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.571852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.572004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.572030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.572157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.572184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.572311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.572338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.572512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.572540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.572671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.572699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.572815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.572842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.572941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.572969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.573098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.573127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 
00:35:48.332 [2024-11-20 00:00:22.573263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.332 [2024-11-20 00:00:22.573310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.332 qpair failed and we were unable to recover it. 00:35:48.332 [2024-11-20 00:00:22.573447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.573475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.573602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.573631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.573750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.573777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.573900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.573926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.574023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.574050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.574179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.574206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.574326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.574353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.574450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.574478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.574603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.574630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 
00:35:48.333 [2024-11-20 00:00:22.574753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.574780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.574893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.574919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.575035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.575062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.575170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.575197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.575307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.575333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.575419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.575445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.575560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.575586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.575704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.575731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.575821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.575848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.575997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.576023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 
00:35:48.333 [2024-11-20 00:00:22.576129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.576161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.576291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.576323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.576484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.576511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.576656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.576683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.576833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.576860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.577039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.577066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.577204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.577231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.577332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.577359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.577481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.577508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.577619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.577645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 
00:35:48.333 [2024-11-20 00:00:22.577769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.577795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.577881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.577908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.578006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.578034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.578159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.333 [2024-11-20 00:00:22.578186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.333 qpair failed and we were unable to recover it. 00:35:48.333 [2024-11-20 00:00:22.578320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.578366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.578552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.578578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.578675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.578702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.578795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.578822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.578942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.578969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.579118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.579146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 
00:35:48.334 [2024-11-20 00:00:22.579236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.579280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.579416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.579445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.579574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.579619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.579738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.579764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.579881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.579907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.580058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.580094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.580184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.580210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.580327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.580353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.580568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.580595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.580739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.580768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 
00:35:48.334 [2024-11-20 00:00:22.580903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.580930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.581051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.581086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.581176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.581203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.581294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.581323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.581451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.581478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.581644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.581673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.581805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.581837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.581994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.582022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.582140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.582179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.582303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.582330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 
00:35:48.334 [2024-11-20 00:00:22.582451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.582477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.582596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.582626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.582715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.582742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.582826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.582852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.583001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.583027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.334 qpair failed and we were unable to recover it. 00:35:48.334 [2024-11-20 00:00:22.583136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.334 [2024-11-20 00:00:22.583163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.583264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.583291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.583413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.583438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.583563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.583589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.583680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.583706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 
00:35:48.335 [2024-11-20 00:00:22.583799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.583827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.583948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.583978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.584081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.584109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.584204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.584231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.584325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.584352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.584485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.584512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.584606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.584635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.584784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.584812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.584931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.584958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.585054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.585088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 
00:35:48.335 [2024-11-20 00:00:22.585187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.585214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.585338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.585364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.585478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.585505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.585626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.585654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.585745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.585771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.585863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.585889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.586002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.586028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.586121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.586148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.586336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.586365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.586491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.586519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 
00:35:48.335 [2024-11-20 00:00:22.586653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.586683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.586779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.586809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.586971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.586997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.587089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.587117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.587209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.587236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.587326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.587371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.587513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.587543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.335 [2024-11-20 00:00:22.587667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.335 [2024-11-20 00:00:22.587697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.335 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.587827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.587856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.588002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.588029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 
00:35:48.336 [2024-11-20 00:00:22.588159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.588187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.588308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.588343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.588549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.588576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.588719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.588745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.588867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.588893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.589012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.589039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.589147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.589175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.589285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.589311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.589440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.589466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.589665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.589691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 
00:35:48.336 [2024-11-20 00:00:22.589813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.589839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.589960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.589987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.590098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.590125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.590225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.590252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.590354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.590381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.590533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.590560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.590675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.590701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.590827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.590854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.591011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.591050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.591176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.591216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 
00:35:48.336 [2024-11-20 00:00:22.591349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.591377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.591511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.591541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.591701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.591728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.591827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.591853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.591947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.591974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.592090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.592117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.592249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.592276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.592388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.592430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.592605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.592635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.592733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.592761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 
00:35:48.336 [2024-11-20 00:00:22.592877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.592904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.593048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.593080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.593210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.336 [2024-11-20 00:00:22.593237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.336 qpair failed and we were unable to recover it. 00:35:48.336 [2024-11-20 00:00:22.593365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.593394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.593521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.593550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.593718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.593746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.593857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.593901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.594057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.594108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.594235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.594262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.594385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.594411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 
00:35:48.337 [2024-11-20 00:00:22.594539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.594566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.594688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.594719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.594840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.594867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.594975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.595014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.595189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.595218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.595371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.595398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.595554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.595581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.595705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.595732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.595861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.595906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.596001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.596029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 
00:35:48.337 [2024-11-20 00:00:22.596138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.596165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.596286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.596312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.596421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.596450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.596590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.596621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.596747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.596776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.596941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.596980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.597109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.597138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.597275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.597319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.597479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.597506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.597661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.597687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 
00:35:48.337 [2024-11-20 00:00:22.597875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.597901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.597995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.598023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.598151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.598179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.598270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.598297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.598417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.598444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.337 qpair failed and we were unable to recover it. 00:35:48.337 [2024-11-20 00:00:22.598598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.337 [2024-11-20 00:00:22.598625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.598820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.598847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.598961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.598988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.599143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.599171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.599261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.599287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 
00:35:48.338 [2024-11-20 00:00:22.599405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.599431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.599567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.599611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.599802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.599832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.599964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.599993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.600148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.600176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.600263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.600306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.600511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.600541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.600696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.600722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.600844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.600871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.601036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.601066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 
00:35:48.338 [2024-11-20 00:00:22.601193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.601221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.601306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.601338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.601423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.601467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.601602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.601632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.601761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.601791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.601921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.601950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.602106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.602134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.602265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.602303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.602407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.602435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 00:35:48.338 [2024-11-20 00:00:22.602572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.338 [2024-11-20 00:00:22.602616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.338 qpair failed and we were unable to recover it. 
00:35:48.339 [2024-11-20 00:00:22.602714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.602741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.602890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.602916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.603032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.603059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.603196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.603224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.603337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.603364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.603490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.603517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.603606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.603633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.603821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.603863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.604027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.604056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.604165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.604196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 
00:35:48.339 [2024-11-20 00:00:22.604341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.604367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.604459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.604486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.604582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.604608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.604739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.604769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.604930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.604959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.605130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.605159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.605263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.605290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.605412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.605438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.605558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.605585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.605675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.605719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 
00:35:48.339 [2024-11-20 00:00:22.605837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.605865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.605983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.606010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.606111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.606138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.606244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.606272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.606409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.606455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.606593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.606637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.606739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.606766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.606886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.606913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.607030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.607056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.607267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.607294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 
00:35:48.339 [2024-11-20 00:00:22.607411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.607437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.607586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.607613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.607733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.339 [2024-11-20 00:00:22.607760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.339 qpair failed and we were unable to recover it. 00:35:48.339 [2024-11-20 00:00:22.607881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.607910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.608054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.608103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.608250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.608282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.608434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.608462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.608582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.608609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.608732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.608758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.608881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.608908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 
00:35:48.340 [2024-11-20 00:00:22.609043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.609090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.609219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.609248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.609345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.609373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.609490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.609516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.609601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.609643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.609799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.609827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.609948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.609975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.610157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.610185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.610338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.610364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.610456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.610481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 
00:35:48.340 [2024-11-20 00:00:22.610635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.610664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.610830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.610858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.610951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.610978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.611094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.611122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.611214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.611241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.611385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.611411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.611539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.611567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.611767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.611794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.611945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.611978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.612127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.612155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 
00:35:48.340 [2024-11-20 00:00:22.612334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.612361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.612481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.612507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.612672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.612715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.612886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.612912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.613034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.613061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.613203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.340 [2024-11-20 00:00:22.613248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.340 qpair failed and we were unable to recover it. 00:35:48.340 [2024-11-20 00:00:22.613420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.613465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.613595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.613638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.613757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.613784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.613872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.613898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 
00:35:48.341 [2024-11-20 00:00:22.614017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.614044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.614166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.614193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.614300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.614327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.614463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.614489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.614617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.614643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.614791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.614817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.614904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.614932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.615090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.615117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.615279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.615322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.615413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.615441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 
00:35:48.341 [2024-11-20 00:00:22.615593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.615620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.615710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.615736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.615857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.615884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.616089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.616116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.616293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.616320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.616454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.616481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.616606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.616634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.616758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.616784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.616879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.616905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.617027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.617054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 
00:35:48.341 [2024-11-20 00:00:22.617179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.617205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.617326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.617353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.617498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.617525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.617667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.617693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.617793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.617832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.617968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.617997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.618122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.618151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.618301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.618328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.618450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.618482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 00:35:48.341 [2024-11-20 00:00:22.618634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.341 [2024-11-20 00:00:22.618678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.341 qpair failed and we were unable to recover it. 
00:35:48.342 [2024-11-20 00:00:22.618824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.618855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.619008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.619034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.619185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.619212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.619300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.619344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.619478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.619507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.619667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.619699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.619798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.619840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.619928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.619955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.620083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.620112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.620243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.620272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 
00:35:48.342 [2024-11-20 00:00:22.620432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.620462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.620574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.620606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.620769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.620800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.620946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.620973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.621125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.621153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.621275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.621301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.621421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.621448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.621568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.621596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.621720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.621749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.621870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.621896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 
00:35:48.342 [2024-11-20 00:00:22.621992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.622021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.622130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.622157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.622281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.622308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.622397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.622424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.622511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.622538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.622696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.622723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.622838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.622865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.623012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.623039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.623168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.623195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.342 [2024-11-20 00:00:22.623309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.623335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 
00:35:48.342 [2024-11-20 00:00:22.623447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.342 [2024-11-20 00:00:22.623473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.342 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.623591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.623620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.623740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.623769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.623897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.623926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.624097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.624125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.624270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.624296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.624435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.624464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.624635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.624662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.624747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.624778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.624918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.624943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 
00:35:48.343 [2024-11-20 00:00:22.625088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.625131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.625288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.625316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.625438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.625465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.625616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.625648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.625754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.625785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.625912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.625941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.626082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.626127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.626244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.626271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.626380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.626419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.626588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.626634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 
00:35:48.343 [2024-11-20 00:00:22.626773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.626818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.626905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.626932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.627057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.627098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.627220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.627246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.627340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.627366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.627457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.627483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.627579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.627607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.627699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.627727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.627866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.627893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 00:35:48.343 [2024-11-20 00:00:22.628007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.343 [2024-11-20 00:00:22.628033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.343 qpair failed and we were unable to recover it. 
00:35:48.343 [2024-11-20 00:00:22.628159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.628186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 00:35:48.344 [2024-11-20 00:00:22.628309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.628336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 00:35:48.344 [2024-11-20 00:00:22.628452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.628479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 00:35:48.344 [2024-11-20 00:00:22.628602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.628630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 00:35:48.344 [2024-11-20 00:00:22.628749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.628776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 00:35:48.344 [2024-11-20 00:00:22.628903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.628932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 00:35:48.344 [2024-11-20 00:00:22.629093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.629119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 00:35:48.344 [2024-11-20 00:00:22.629220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.629247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 00:35:48.344 [2024-11-20 00:00:22.629353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.629398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 00:35:48.344 [2024-11-20 00:00:22.629555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.629581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 
00:35:48.344 [2024-11-20 00:00:22.629728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.629755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 00:35:48.344 [2024-11-20 00:00:22.629862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.629893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 00:35:48.344 [2024-11-20 00:00:22.629990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.344 [2024-11-20 00:00:22.630021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.344 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.630233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.630261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.630395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.630425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.630516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.630545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.630651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.630681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.630839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.630869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.630969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.631008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.631184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.631211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 
00:35:48.606 [2024-11-20 00:00:22.631363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.631390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.631472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.631499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.631624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.631650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.631738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.631782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.631912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.631942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.632090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.632117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.632210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.606 [2024-11-20 00:00:22.632237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.606 qpair failed and we were unable to recover it. 00:35:48.606 [2024-11-20 00:00:22.632367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.632396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.632539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.632569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.632785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.632813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 
00:35:48.607 [2024-11-20 00:00:22.632956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.632982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.633102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.633128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.633221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.633246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.633401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.633428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.633632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.633662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.633791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.633817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.633933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.633959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.634080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.634107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.634196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.634224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.634315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.634342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 
00:35:48.607 [2024-11-20 00:00:22.634505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.634535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.634703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.634733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.634891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.634921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.635051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.635110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.635247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.635277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.635446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.635478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.635609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.635639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.635743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.635774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.635915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.635942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.636063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.636103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 
00:35:48.607 [2024-11-20 00:00:22.636224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.636250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.636373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.636415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.636589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.636615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.636729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.636755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.636847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.636875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.636998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.637024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.637123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.637150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.637275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.637302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.637474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.637526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 00:35:48.607 [2024-11-20 00:00:22.637648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.637674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.607 qpair failed and we were unable to recover it. 
00:35:48.607 [2024-11-20 00:00:22.637794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.607 [2024-11-20 00:00:22.637822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.637933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.637963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.638119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.638158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.638282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.638310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.638460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.638506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.638641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.638671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.638846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.638878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.639005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.639034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.639207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.639235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.639370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.639399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 
00:35:48.608 [2024-11-20 00:00:22.639547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.639578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.639726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.639753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.639880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.639908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.640011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.640038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.640164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.640193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.640288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.640314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.640493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.640538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.640712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.640754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.640844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.640870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.640988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.641015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 
00:35:48.608 [2024-11-20 00:00:22.641138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.641164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.641309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.641334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.641512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.641579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.641788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.641819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.641956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.641989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.642176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.642212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.642313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.642342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.642507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.642561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.642721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.642751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.608 qpair failed and we were unable to recover it. 00:35:48.608 [2024-11-20 00:00:22.642862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.608 [2024-11-20 00:00:22.642892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 
00:35:48.609 [2024-11-20 00:00:22.643052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.643094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.643235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.643262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.643373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.643399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.643482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.643509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.643680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.643730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.643844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.643871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.644014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.644045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.644197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.644223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.644368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.644400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.644519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.644564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 
00:35:48.609 [2024-11-20 00:00:22.644661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.644690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.644888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.644917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.645082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.645123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.645345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.645403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.645607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.645698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.645841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.645910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.646037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.646092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.646218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.646247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.646359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.646390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.646500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.646531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 
00:35:48.609 [2024-11-20 00:00:22.646657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.646687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.646786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.646816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.646996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.647024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.647120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.647148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.647268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.647295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.647414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.647442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.647524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.647552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.647694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.647722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.647850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.609 [2024-11-20 00:00:22.647880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:48.609 qpair failed and we were unable to recover it. 00:35:48.609 [2024-11-20 00:00:22.648028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.648097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 
00:35:48.610 [2024-11-20 00:00:22.648236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.648276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.648390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.648439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.648546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.648576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.648798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.648827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.648923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.648951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.649128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.649159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.649305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.649330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.649421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.649463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.649570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.649599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.649744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.649769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 
00:35:48.610 [2024-11-20 00:00:22.649891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.649916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.650077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.650106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.650269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.650294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.650425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.650453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.650553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.650581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.650742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.650770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.650942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.650967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.651064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.651095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.651188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.651215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 00:35:48.610 [2024-11-20 00:00:22.651344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.610 [2024-11-20 00:00:22.651369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:48.610 qpair failed and we were unable to recover it. 
00:35:48.610 [2024-11-20 00:00:22.651520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.610 [2024-11-20 00:00:22.651545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:48.610 qpair failed and we were unable to recover it.
00:35:48.610 [... the same three-line sequence (connect() failed, errno = 111 from posix.c:1054:posix_sock_create; sock connection error from nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock against addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back from 00:00:22.651 to 00:00:22.670, cycling through tqpair values 0x7f6070000b90, 0x7f6064000b90, 0x7f6068000b90 and 0x129cb40, pauses between 00:00:22.670973 and 00:00:23.092368, then continues from 00:00:23.092 to 00:00:23.106 for tqpair=0x129cb40 only (console time 00:35:48.610 through 00:35:48.885) ...]
00:35:48.885 [2024-11-20 00:00:23.106356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.885 [2024-11-20 00:00:23.106390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420
00:35:48.885 qpair failed and we were unable to recover it.
00:35:48.885 [2024-11-20 00:00:23.106488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.106515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.106608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.106634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.106721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.106747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.106869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.106895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.107040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.107080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.107181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.107208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.107300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.107326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.107456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.107485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.107625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.107652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.107754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.107780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 
00:35:48.885 [2024-11-20 00:00:23.107902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.107929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.108082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.108109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.108199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.108225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.108319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.108346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.108434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.108461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.108577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.108607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.108741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.108768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.108887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.108914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.109081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.109108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.109242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.109269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 
00:35:48.885 [2024-11-20 00:00:23.109370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.109406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.109531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.109557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.109704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.109730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.109846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.109873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.110019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.110045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.110145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.110172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.110290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.885 [2024-11-20 00:00:23.110317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.885 qpair failed and we were unable to recover it. 00:35:48.885 [2024-11-20 00:00:23.110434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.110460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.110573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.110599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.110744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.110770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 
00:35:48.886 [2024-11-20 00:00:23.110858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.110884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.110994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.111025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.111149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.111176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.111299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.111325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.111462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.111492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.111647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.111676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.111808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.111835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.111956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.111983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.112084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.112112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.112263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.112290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 
00:35:48.886 [2024-11-20 00:00:23.112391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.112418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.112565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.112592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.112738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.112765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.112927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.112969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.113113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.113140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.113262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.113289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.113465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.113492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.113606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.113633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.113761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.113788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.113906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.113932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 
00:35:48.886 [2024-11-20 00:00:23.114056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.114099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.114245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.114271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.114388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.114415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.114508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.114535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.114652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.114678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.114768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.114795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.114939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.114966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.115091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.115119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.115267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.115298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.115424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.115451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 
00:35:48.886 [2024-11-20 00:00:23.115597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.886 [2024-11-20 00:00:23.115623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.886 qpair failed and we were unable to recover it. 00:35:48.886 [2024-11-20 00:00:23.115716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.115742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.115888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.115915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.115996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.116040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.116168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.116194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.116315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.116342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.116463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.116489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.116657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.116683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.116771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.116798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.116897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.116923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 
00:35:48.887 [2024-11-20 00:00:23.117084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.117115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.117212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.117241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.117364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.117391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.117481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.117507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.117637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.117667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.117800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.117830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.117995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.118021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.118146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.118173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.118323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.118350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.118508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.118534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 
00:35:48.887 [2024-11-20 00:00:23.118652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.118678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.118777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.118804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.118909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.118938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.119060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.119098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.119230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.887 [2024-11-20 00:00:23.119257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.887 qpair failed and we were unable to recover it. 00:35:48.887 [2024-11-20 00:00:23.119346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.119372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.119490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.119517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.119606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.119633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.119732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.119758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.119844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.119871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 
00:35:48.888 [2024-11-20 00:00:23.119967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.119993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.120106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.120133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.120216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.120243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.120344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.120371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.120475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.120501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.120597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.120623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.120737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.120764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.120889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.120915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.121021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.121050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.121199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.121229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 
00:35:48.888 [2024-11-20 00:00:23.121400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.121427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.121546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.121572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.121718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.121744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.121825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.121851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.121969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.121995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.122117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.122145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.122289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.122319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.122417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.122447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.122581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.122608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.122696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.122723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 
00:35:48.888 [2024-11-20 00:00:23.122892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.122922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.123097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.123124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.123202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.123228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.123358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.888 [2024-11-20 00:00:23.123385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.888 qpair failed and we were unable to recover it. 00:35:48.888 [2024-11-20 00:00:23.123525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.889 [2024-11-20 00:00:23.123554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.889 qpair failed and we were unable to recover it. 00:35:48.889 [2024-11-20 00:00:23.123672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.889 [2024-11-20 00:00:23.123701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.889 qpair failed and we were unable to recover it. 00:35:48.889 [2024-11-20 00:00:23.123867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.889 [2024-11-20 00:00:23.123893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.889 qpair failed and we were unable to recover it. 00:35:48.889 [2024-11-20 00:00:23.124057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.889 [2024-11-20 00:00:23.124095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.889 qpair failed and we were unable to recover it. 00:35:48.889 [2024-11-20 00:00:23.124187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.889 [2024-11-20 00:00:23.124216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.889 qpair failed and we were unable to recover it. 00:35:48.889 [2024-11-20 00:00:23.124358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.889 [2024-11-20 00:00:23.124384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.889 qpair failed and we were unable to recover it. 
00:35:48.889 [2024-11-20 00:00:23.125231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.889 [2024-11-20 00:00:23.125272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:48.889 qpair failed and we were unable to recover it.
[... from 00:00:23.124506 through 00:00:23.132638 the same failure sequence continues, alternating between tqpair=0x129cb40 and tqpair=0x7f6068000b90, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:35:48.890 [2024-11-20 00:00:23.132762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.890 [2024-11-20 00:00:23.132789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.890 qpair failed and we were unable to recover it. 00:35:48.890 [2024-11-20 00:00:23.132909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.890 [2024-11-20 00:00:23.132936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.890 qpair failed and we were unable to recover it. 00:35:48.890 [2024-11-20 00:00:23.133051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.133084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.133209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.133236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.133328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.133355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.133469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.133495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.133636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.133683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.133824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.133866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.133986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.134014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.134113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.134140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 
00:35:48.891 [2024-11-20 00:00:23.134241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.134268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.134362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.134390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.134508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.134535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.134653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.134680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.134799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.134828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.134919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.134946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.135062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.135095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.135191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.135234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.135376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.135403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.135550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.135576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 
00:35:48.891 [2024-11-20 00:00:23.135717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.135746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.135865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.135894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.136059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.136093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.136182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.136215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.136369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.136396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.136568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.136613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.136703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.136731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.136825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.136853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.137004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.137031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.137132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.137161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 
00:35:48.891 [2024-11-20 00:00:23.137260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.137287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.137464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.137492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.137638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.137664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.891 [2024-11-20 00:00:23.137815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.891 [2024-11-20 00:00:23.137842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:48.891 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.137967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.137995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.138128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.138156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.138285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.138312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.138464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.138491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.138614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.138641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.138736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.138763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 
00:35:48.892 [2024-11-20 00:00:23.138905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.138932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.139050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.139088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.139238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.139267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.139389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.139418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.139530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.139560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.139684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.139713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.139849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.139879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.140021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.140048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.140203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.140229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.140313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.140340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 
00:35:48.892 [2024-11-20 00:00:23.140558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.140605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.140750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.140778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.140942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.140972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.141076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.141121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.141237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.141264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.141386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.141413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.141526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.141561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.141715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.141744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.141923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.141967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.142089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.142115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 
00:35:48.892 [2024-11-20 00:00:23.142216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.142242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.142369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.142396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.142484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.142510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.142630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.142657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.142775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.142802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.892 [2024-11-20 00:00:23.142919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.892 [2024-11-20 00:00:23.142945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.892 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.143033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.143076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.143198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.143224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.143338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.143375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.143485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.143514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 
00:35:48.893 [2024-11-20 00:00:23.143647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.143674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.143795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.143821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.143952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.143981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.144155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.144182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.144273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.144299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.144398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.144424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.144534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.144561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.144678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.144708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.144847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.144876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.145017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.145043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 
00:35:48.893 [2024-11-20 00:00:23.145167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.145193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.145277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.145304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.145457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.145483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.145606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.145632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.145780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.145806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.145940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.145984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.146112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.146140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.146238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.146264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.146401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.146430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 00:35:48.893 [2024-11-20 00:00:23.146551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.893 [2024-11-20 00:00:23.146580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.893 qpair failed and we were unable to recover it. 
00:35:48.893 [2024-11-20 00:00:23.146687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.146730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.146851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.146879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.147048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.147113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.147239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.147266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.147366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.147409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.147551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.147577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.147668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.147694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.147779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.147805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.147917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.147943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.148064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.148097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 
00:35:48.894 [2024-11-20 00:00:23.148242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.148269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.148380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.148409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.148539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.148583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.148704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.148731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.148878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.148908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.149054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.149087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.149213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.149239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.149347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.149381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.149514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.149544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.149668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.149697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 
00:35:48.894 [2024-11-20 00:00:23.149812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.149842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.149971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.150015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.150137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.150164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.150258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.150284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.150372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.150409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.150523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.150549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.150667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.150693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.150782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.150808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.150894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.150921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.150996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.151022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 
00:35:48.894 [2024-11-20 00:00:23.151138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.151166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.151255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.894 [2024-11-20 00:00:23.151282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.894 qpair failed and we were unable to recover it. 00:35:48.894 [2024-11-20 00:00:23.151431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.895 [2024-11-20 00:00:23.151457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.895 qpair failed and we were unable to recover it. 00:35:48.895 [2024-11-20 00:00:23.151594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.895 [2024-11-20 00:00:23.151623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.895 qpair failed and we were unable to recover it. 00:35:48.895 [2024-11-20 00:00:23.151757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.895 [2024-11-20 00:00:23.151786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.895 qpair failed and we were unable to recover it. 00:35:48.895 [2024-11-20 00:00:23.151907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.895 [2024-11-20 00:00:23.151936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.895 qpair failed and we were unable to recover it. 00:35:48.895 [2024-11-20 00:00:23.152085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.895 [2024-11-20 00:00:23.152112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.895 qpair failed and we were unable to recover it. 00:35:48.895 [2024-11-20 00:00:23.152211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.895 [2024-11-20 00:00:23.152237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.895 qpair failed and we were unable to recover it. 00:35:48.895 [2024-11-20 00:00:23.152358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.895 [2024-11-20 00:00:23.152387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.895 qpair failed and we were unable to recover it. 00:35:48.895 [2024-11-20 00:00:23.152519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.895 [2024-11-20 00:00:23.152549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:48.895 qpair failed and we were unable to recover it. 
00:35:48.895 [2024-11-20 00:00:23.152707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.895 [2024-11-20 00:00:23.152736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420
00:35:48.895 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error, then "qpair failed and we were unable to recover it." — repeats continuously from 00:00:23.152898 through 00:00:23.189760 (log timestamps 00:35:48.895 to 00:35:49.184), first for tqpair=0x129cb40 and then for tqpair=0x7f6064000b90 and 0x7f6068000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:35:49.184 [2024-11-20 00:00:23.189879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.184 [2024-11-20 00:00:23.189923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.184 qpair failed and we were unable to recover it. 00:35:49.184 [2024-11-20 00:00:23.190044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.184 [2024-11-20 00:00:23.190086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.184 qpair failed and we were unable to recover it. 00:35:49.184 [2024-11-20 00:00:23.190214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.184 [2024-11-20 00:00:23.190246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.184 qpair failed and we were unable to recover it. 00:35:49.184 [2024-11-20 00:00:23.190354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.184 [2024-11-20 00:00:23.190390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.184 qpair failed and we were unable to recover it. 00:35:49.184 [2024-11-20 00:00:23.190493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.184 [2024-11-20 00:00:23.190523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.184 qpair failed and we were unable to recover it. 00:35:49.184 [2024-11-20 00:00:23.190653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.190683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.190803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.190843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.190973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.191002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.191106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.191135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.191219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.191247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 
00:35:49.185 [2024-11-20 00:00:23.191337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.191374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.191491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.191518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.191638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.191667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.191759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.191786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.191937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.191966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.192061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.192095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.192185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.192213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.192308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.192336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.192451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.192481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.192606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.192640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 
00:35:49.185 [2024-11-20 00:00:23.192773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.192803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.192901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.192930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.193034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.193083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.193201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.193231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.193407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.193455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.193589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.193634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.193718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.193746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.193903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.193943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.194076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.194122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.194234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.194263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 
00:35:49.185 [2024-11-20 00:00:23.194414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.194444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.194548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.194578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.194726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.194776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.194871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.194914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.195010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.195037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.195166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.195197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.195288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.185 [2024-11-20 00:00:23.195333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.185 qpair failed and we were unable to recover it. 00:35:49.185 [2024-11-20 00:00:23.195482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa970 is same with the state(6) to be set 00:35:49.186 [2024-11-20 00:00:23.195623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.195667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.195782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.195813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 
00:35:49.186 [2024-11-20 00:00:23.195909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.195938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.196063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.196115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.196232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.196260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.196344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.196395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.196533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.196561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.196680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.196706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.196792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.196819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.196903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.196930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.197018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.197045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.197174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.197204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 
00:35:49.186 [2024-11-20 00:00:23.197301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.197331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.197514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.197543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.197650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.197681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.197789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.197820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.197935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.197965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.198095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.198125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.198235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.198282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.198424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.198471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.198560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.198588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.198684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.198711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 
00:35:49.186 [2024-11-20 00:00:23.198802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.198836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.198941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.198969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.199050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.199084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.199232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.199258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.199350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.199377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.199474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.199500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.199593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.199624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.199782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.199809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.186 [2024-11-20 00:00:23.199895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.186 [2024-11-20 00:00:23.199922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.186 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.200014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.200040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 
00:35:49.187 [2024-11-20 00:00:23.200153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.200181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.200278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.200306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.200421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.200453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.200562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.200593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.200706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.200737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.200834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.200864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.200998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.201028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.201160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.201200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.201304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.201333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.201457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.201485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 
00:35:49.187 [2024-11-20 00:00:23.201619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.201649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.201776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.201807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.201934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.201964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.202115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.202155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.202256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.202284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.202420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.202450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.202575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.202605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.202780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.202838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.202939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.202969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.203075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.203103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 
00:35:49.187 [2024-11-20 00:00:23.203200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.203227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.203317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.203344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.203496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.203539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.203681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.203713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.203907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.203973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.204124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.204152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.204250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.204277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.204409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.204440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.204546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.204575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.187 [2024-11-20 00:00:23.204707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.204760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 
00:35:49.187 [2024-11-20 00:00:23.204934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.187 [2024-11-20 00:00:23.204993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.187 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.205139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.205170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.205291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.205339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.205515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.205564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.205685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.205737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.205862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.205889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.206008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.206036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.206168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.206208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.206333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.206378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.206485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.206515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 
00:35:49.188 [2024-11-20 00:00:23.206634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.206686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.206794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.206826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.206973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.207000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.207112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.207141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.207240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.207267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.207412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.207472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.207640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.207672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.207792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.207842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.207940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.207970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.208096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.208140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 
00:35:49.188 [2024-11-20 00:00:23.208233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.208260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.208342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.208373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.208503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.208532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.208640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.208667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.208845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.208877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.208980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.209012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.209150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.209190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.209288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.209323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.209462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.188 [2024-11-20 00:00:23.209507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.188 qpair failed and we were unable to recover it. 00:35:49.188 [2024-11-20 00:00:23.209648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.189 [2024-11-20 00:00:23.209693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.189 qpair failed and we were unable to recover it. 
00:35:49.189 [2024-11-20 00:00:23.209842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.189 [2024-11-20 00:00:23.209901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.189 qpair failed and we were unable to recover it. 00:35:49.189 [2024-11-20 00:00:23.210026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.189 [2024-11-20 00:00:23.210055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.189 qpair failed and we were unable to recover it. 00:35:49.189 [2024-11-20 00:00:23.210177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.189 [2024-11-20 00:00:23.210205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.189 qpair failed and we were unable to recover it. 00:35:49.189 [2024-11-20 00:00:23.210308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.189 [2024-11-20 00:00:23.210338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.189 qpair failed and we were unable to recover it. 00:35:49.189 [2024-11-20 00:00:23.210436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.189 [2024-11-20 00:00:23.210466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.189 qpair failed and we were unable to recover it. 00:35:49.189 [2024-11-20 00:00:23.210593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.189 [2024-11-20 00:00:23.210623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.189 qpair failed and we were unable to recover it. 00:35:49.189 [2024-11-20 00:00:23.210784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.189 [2024-11-20 00:00:23.210831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.189 qpair failed and we were unable to recover it. 00:35:49.189 [2024-11-20 00:00:23.210924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.189 [2024-11-20 00:00:23.210952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.189 qpair failed and we were unable to recover it. 00:35:49.189 [2024-11-20 00:00:23.211052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.189 [2024-11-20 00:00:23.211096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.189 qpair failed and we were unable to recover it. 00:35:49.189 [2024-11-20 00:00:23.211228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.189 [2024-11-20 00:00:23.211258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.189 qpair failed and we were unable to recover it. 
00:35:49.196 [2024-11-20 00:00:23.245656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.245686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.245849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.245876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.245973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.246000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.246120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.246148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.246268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.246295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.246443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.246472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.246586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.246631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.246760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.246790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.246917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.246944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.247087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.247127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 
00:35:49.196 [2024-11-20 00:00:23.247263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.247295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.247444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.247484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.247635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.247666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.247771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.247801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.247940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.247969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.248100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.248128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.248253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.248283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.248444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.248495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.248607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.248635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.248813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.248862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 
00:35:49.196 [2024-11-20 00:00:23.248958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.248988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.249134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.249162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.249255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.249283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.249377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.249404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.249611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.249641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.249748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.249779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.249886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.249919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.250089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.250118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.250212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.250239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.250372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.250402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 
00:35:49.196 [2024-11-20 00:00:23.250540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.250569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.250714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.250766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.250865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.250896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.251022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.251082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.251194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.251223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.251341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.196 [2024-11-20 00:00:23.251381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.196 qpair failed and we were unable to recover it. 00:35:49.196 [2024-11-20 00:00:23.251478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.251506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.251707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.251756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.251850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.251878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.252005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.252034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 
00:35:49.197 [2024-11-20 00:00:23.252158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.252187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.252337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.252370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.252524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.252580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.252696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.252731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.252887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.252915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.253026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.253053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.253184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.253213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.253313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.253356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.253523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.253554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.253659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.253689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 
00:35:49.197 [2024-11-20 00:00:23.253828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.253860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.253991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.254031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.254155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.254195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.254293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.254321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.254462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.254511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.254733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.254768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.254902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.254934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.255043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.255084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.255213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.255253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.255358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.255387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 
00:35:49.197 [2024-11-20 00:00:23.255569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.255618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.255736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.255784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.255916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.255943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.256074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.256102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.256189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.256216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.256318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.256348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.256478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.256513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.256690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.256737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.256871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.256900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.257033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.257117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 
00:35:49.197 [2024-11-20 00:00:23.257219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.257246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.257343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.257377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.257498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.257524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.257716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.197 [2024-11-20 00:00:23.257742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.197 qpair failed and we were unable to recover it. 00:35:49.197 [2024-11-20 00:00:23.257865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.257891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.258029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.258098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.258213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.258252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.258402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.258434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.258530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.258560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.258695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.258724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 
00:35:49.198 [2024-11-20 00:00:23.258819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.258850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.259022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.259053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.259184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.259212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.259315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.259342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.259457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.259489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.259646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.259675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.259834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.259864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.259981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.260009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.260117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.260144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.260234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.260261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 
00:35:49.198 [2024-11-20 00:00:23.260349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.260384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.260496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.260523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.260617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.260644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.260737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.260767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.260926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.260956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.261050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.261096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.261198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.261225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.261316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.261343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.261467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.261494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.261600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.261629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 
00:35:49.198 [2024-11-20 00:00:23.261721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.261763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.261921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.261965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.262086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.262116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.262206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.262235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.262334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.262373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.262518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.262549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.262681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.262727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.262855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.262886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.262994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.263021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.263128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.263162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 
00:35:49.198 [2024-11-20 00:00:23.263254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.263281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.198 qpair failed and we were unable to recover it. 00:35:49.198 [2024-11-20 00:00:23.263431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.198 [2024-11-20 00:00:23.263461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.263558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.263588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.263706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.263733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.263913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.263970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.264076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.264107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.264226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.264257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.264429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.264472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.264582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.264626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.264736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.264781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 
00:35:49.199 [2024-11-20 00:00:23.264880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.264907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.265046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.265106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.265207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.265236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.265327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.265354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.265463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.265492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.265652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.265700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.265860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.265914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.266055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.266094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.266204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.266251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.266344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.266376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 
00:35:49.199 [2024-11-20 00:00:23.266519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.266565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.266678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.266712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.266851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.266880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.266976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.267003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.267104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.267132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.267226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.267252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.267344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.267381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.267474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.267500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.267638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.267670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.267840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.267885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 
00:35:49.199 [2024-11-20 00:00:23.268034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.268090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.268216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.268244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.199 [2024-11-20 00:00:23.268351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.199 [2024-11-20 00:00:23.268390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.199 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.268523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.268553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.268714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.268745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.268837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.268879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.268974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.269002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.269100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.269128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.269241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.269270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.269395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.269429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 
00:35:49.200 [2024-11-20 00:00:23.269571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.269602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.269700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.269732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.269862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.269892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.270018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.270049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.270181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.270210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.270321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.270352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.270489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.270517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.270643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.270670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.270820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.270871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.270981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.271014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 
00:35:49.200 [2024-11-20 00:00:23.271138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.271166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.271276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.271306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.271456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.271486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.271644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.271696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.271848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.271896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.272041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.272079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.272185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.272229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.272334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.272367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.272464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.272493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.272587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.272616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 
00:35:49.200 [2024-11-20 00:00:23.272748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.272777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.272905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.272935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.273048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.273084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.273180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.273207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.273288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.273314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.273446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.273475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.273646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.273680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.273817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.273848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.273940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.273980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.200 [2024-11-20 00:00:23.274110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.274157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 
00:35:49.200 [2024-11-20 00:00:23.274253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.200 [2024-11-20 00:00:23.274281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.200 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.274418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.274448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.274622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.274649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.274830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.274860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.274984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.275014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.275151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.275179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.275280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.275320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.275507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.275540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.275700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.275750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.275852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.275881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 
00:35:49.201 [2024-11-20 00:00:23.276034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.276061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.276171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.276200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.276299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.276327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.276531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.276560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.276688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.276720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.276814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.276844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.276947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.276977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.277108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.277165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.277276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.277306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.277424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.277455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 
00:35:49.201 [2024-11-20 00:00:23.277579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.277609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.277732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.277762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.277963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.278021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.278129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.278164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.278253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.278281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.278459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.278503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.278640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.278685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.278799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.278826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.278950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.278978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.279121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.279152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 
00:35:49.201 [2024-11-20 00:00:23.279292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.279322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.279475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.279503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.279656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.279683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.279782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.279809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.279903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.279932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.280036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.280089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.280200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.280229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.280345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.201 [2024-11-20 00:00:23.280385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.201 qpair failed and we were unable to recover it. 00:35:49.201 [2024-11-20 00:00:23.280514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.280562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.280670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.280705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 
00:35:49.202 [2024-11-20 00:00:23.280876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.280907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.281018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.281046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.281173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.281217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.281322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.281354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.281489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.281522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.281617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.281659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.281808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.281857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.281947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.281977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.282183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.282212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.282299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.282326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 
00:35:49.202 [2024-11-20 00:00:23.282493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.282521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.282696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.282741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.282864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.282893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.283023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.283054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.283168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.283195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.283293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.283320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.283418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.283445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.283616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.283646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.283789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.283833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.283959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.283989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 
00:35:49.202 [2024-11-20 00:00:23.284122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.284151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.284347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.284404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.284569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.284599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.284801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.284837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.284969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.285000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.285106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.285151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.285237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.202 [2024-11-20 00:00:23.285264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.202 qpair failed and we were unable to recover it. 00:35:49.202 [2024-11-20 00:00:23.285386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.285413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.285545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.285576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.285757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.285787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 
00:35:49.203 [2024-11-20 00:00:23.285892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.285925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.286086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.286131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.286229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.286256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.286373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.286399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.286509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.286539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.286650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.286678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.286824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.286853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.286969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.286998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.287130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.287171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.287272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.287301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 
00:35:49.203 [2024-11-20 00:00:23.287446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.287490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.287622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.287653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.287842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.287873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.287994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.288024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.288158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.288187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.288306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.288347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.288505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.288552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.288687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.288731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.288821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.288849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.288947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.288974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 
00:35:49.203 [2024-11-20 00:00:23.289085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.289115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.289237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.289264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.289362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.289408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.289539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.289570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.289708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.289757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.289961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.289989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.290091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.290120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.290266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.290293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.290445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.290475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 00:35:49.203 [2024-11-20 00:00:23.290627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.203 [2024-11-20 00:00:23.290657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.203 qpair failed and we were unable to recover it. 
00:35:49.204 [2024-11-20 00:00:23.290813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.290843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.290978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.291021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.291155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.291199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.291334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.291371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.291500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.291541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.291669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.291699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.291809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.291852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.292030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.292056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.292264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.292292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.292436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.292467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 
00:35:49.204 [2024-11-20 00:00:23.292598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.292629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.292764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.292794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.292925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.292956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.293117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.293146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.293292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.293319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.293453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.293483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.293584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.293615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.293752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.293782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.293874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.293904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.294052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.294089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 
00:35:49.204 [2024-11-20 00:00:23.294216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.294244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.294335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.294375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.204 qpair failed and we were unable to recover it. 00:35:49.204 [2024-11-20 00:00:23.294538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.204 [2024-11-20 00:00:23.294566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.294716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.294746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.294872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.294902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.295030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.295075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.295220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.295247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.295356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.295386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.295512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.295543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.295670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.295700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 
00:35:49.205 [2024-11-20 00:00:23.295908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.295939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.296116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.296144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.296258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.296285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.296375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.296402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.296538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.296569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.296702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.296747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.296875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.296904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.297019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.297047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.297211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.297238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.297379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.297408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 
00:35:49.205 [2024-11-20 00:00:23.297552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.297580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.297783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.297813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.297922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.297965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.298051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.298098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.298247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.298275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.298451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.298483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.298613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.298643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.298755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.298799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.298937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.205 [2024-11-20 00:00:23.298973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.205 qpair failed and we were unable to recover it. 00:35:49.205 [2024-11-20 00:00:23.299152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.299180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 
00:35:49.206 [2024-11-20 00:00:23.299295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.299322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.299434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.299461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.299558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.299586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.299723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.299752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.299846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.299877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.300034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.300080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.300228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.300268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.300433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.300477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.300580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.300624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.300773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.300804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 
00:35:49.206 [2024-11-20 00:00:23.300961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.300991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.301106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.301152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.301298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.301326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.301446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.301473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.301663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.301709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.301841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.301871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.302012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.302041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.302209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.302237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.302328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.302374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.302542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.302573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 
00:35:49.206 [2024-11-20 00:00:23.302735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.302766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.302878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.302910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.303091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.303119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.303211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.303238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.303329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.303376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.303491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.303519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.303662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.303693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.303851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.303882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.206 qpair failed and we were unable to recover it. 00:35:49.206 [2024-11-20 00:00:23.304001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.206 [2024-11-20 00:00:23.304028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.304141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.304170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 
00:35:49.207 [2024-11-20 00:00:23.304295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.304323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.304441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.304468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.304592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.304636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.304771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.304807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.304965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.304995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.305126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.305153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.305299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.305326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.305459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.305486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.305637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.305665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.305817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.305847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 
00:35:49.207 [2024-11-20 00:00:23.305990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.306018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.306146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.306171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.306267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.306295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.306424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.306450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.306572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.306615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.306710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.306739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.306830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.306860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.306972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.307001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.307151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.307179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.307272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.307299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 
00:35:49.207 [2024-11-20 00:00:23.307432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.307459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.307645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.307672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.307822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.307852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.307975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.308005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.308152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.308193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.308313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.308342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.308465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.308493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.308658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.207 [2024-11-20 00:00:23.308689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.207 qpair failed and we were unable to recover it. 00:35:49.207 [2024-11-20 00:00:23.308824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.308856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.309021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.309051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 
00:35:49.208 [2024-11-20 00:00:23.309204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.309232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.309346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.309398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.309547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.309593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.309731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.309775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.309894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.309921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.310058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.310111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.310251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.310297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.310416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.310443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.310561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.310587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.310683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.310711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 
00:35:49.208 [2024-11-20 00:00:23.310856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.310883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.311000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.311027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.311174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.311215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.311372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.311406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.311520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.311550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.311719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.311749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.311885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.311915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.312047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.312095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.312248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.312279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.312400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.312430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 
00:35:49.208 [2024-11-20 00:00:23.312541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.312572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.312755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.312802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.312919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.312946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.313074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.313102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.313193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.313221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.313322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.313352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.313534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.208 [2024-11-20 00:00:23.313579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.208 qpair failed and we were unable to recover it. 00:35:49.208 [2024-11-20 00:00:23.313731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.313764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.313922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.313952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.314090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.314122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 
00:35:49.209 [2024-11-20 00:00:23.314261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.314291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.314448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.314481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.314611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.314641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.314765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.314795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.314948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.314978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.315153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.315181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.315342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.315386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.315517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.315548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.315673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.315703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.315810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.315837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 
00:35:49.209 [2024-11-20 00:00:23.315951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.315982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.316127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.316156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.316300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.316331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.316501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.316554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.316720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.316750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.316890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.316917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.317034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.317086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.317249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.317277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.317413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.317461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.317565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.317595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 
00:35:49.209 [2024-11-20 00:00:23.317749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.317795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.317939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.317966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.318121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.318149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.318269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.318302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.318420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.318447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.318561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.318588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.318684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.318712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.318856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.318883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.319028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.319065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.209 qpair failed and we were unable to recover it. 00:35:49.209 [2024-11-20 00:00:23.319181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.209 [2024-11-20 00:00:23.319213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 
00:35:49.210 [2024-11-20 00:00:23.319354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.319384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.319494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.319524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.319676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.319717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.319808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.319837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.319944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.319983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.320114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.320146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.320276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.320303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.320432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.320459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.320609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.320637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.320734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.320763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 
00:35:49.210 [2024-11-20 00:00:23.320887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.320915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.321087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.321136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.321254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.321282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.321428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.321458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.321600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.321642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.321833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.321863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.321994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.322025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.322197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.322226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.322353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.322394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 00:35:49.210 [2024-11-20 00:00:23.322520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.210 [2024-11-20 00:00:23.322574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.210 qpair failed and we were unable to recover it. 
00:35:49.210 [2024-11-20 00:00:23.322693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.210 [2024-11-20 00:00:23.322724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:49.210 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 00:35:49.210 through 00:35:49.218 (test timestamps 2024-11-20 00:00:23.322693 through 00:00:23.358048), with only the timestamps and the tqpair pointer varying (0x7f6070000b90, 0x7f6068000b90, 0x7f6064000b90, 0x129cb40); every attempt targets addr=10.0.0.2, port=4420, fails in posix_sock_create with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:35:49.218 [2024-11-20 00:00:23.358182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.358209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.358295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.358321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.358483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.358510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.358618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.358649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.358763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.358793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.358921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.358952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.359079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.359109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.359217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.359250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.359340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.359368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.359492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.359518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 
00:35:49.218 [2024-11-20 00:00:23.359626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.359669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.359816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.359845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.359940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.359970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.360102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.360130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.360247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.360273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.360392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.360434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.360573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.360602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.360765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.360795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.360941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.360971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.361127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.361155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 
00:35:49.218 [2024-11-20 00:00:23.361244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.361272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.361433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.361491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.218 [2024-11-20 00:00:23.361613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.218 [2024-11-20 00:00:23.361646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.218 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.361803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.361834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.361964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.361995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.362106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.362152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.362266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.362293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.362465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.362498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.362650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.362698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.362888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.362917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 
00:35:49.219 [2024-11-20 00:00:23.363015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.363045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.363194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.363221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.363337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.363375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.363527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.363561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.363729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.363777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.363935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.363965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.364115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.364142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.364229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.364261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.364393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.364423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.364541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.364572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 
00:35:49.219 [2024-11-20 00:00:23.364750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.364799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.364934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.364966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.365130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.365159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.365260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.365288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.365412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.365440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.365530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.365563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.365757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.365788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.365912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.365939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.366088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.366129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.219 [2024-11-20 00:00:23.366224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.366253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 
00:35:49.219 [2024-11-20 00:00:23.366397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.219 [2024-11-20 00:00:23.366425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.219 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.366508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.366536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.366713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.366770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.366882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.366917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.367050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.367097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.367230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.367258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.367407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.367438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.367589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.367640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.367821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.367870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.368013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.368040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 
00:35:49.220 [2024-11-20 00:00:23.368181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.368209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.368337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.368383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.368500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.368528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.368652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.368678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.368791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.368818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.368912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.368939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.369062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.369097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.369210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.369237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.369358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.369385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.369481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.369510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 
00:35:49.220 [2024-11-20 00:00:23.369631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.369660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.369827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.369854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.369981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.370013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.370144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.370172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.370270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.370297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.370412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.370450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.370576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.370603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.370688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.370715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.370836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.370863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.371036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.371092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 
00:35:49.220 [2024-11-20 00:00:23.371240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.220 [2024-11-20 00:00:23.371270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.220 qpair failed and we were unable to recover it. 00:35:49.220 [2024-11-20 00:00:23.371395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.371438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.371587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.371618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.371756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.371783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.371925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.371953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.372111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.372157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.372278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.372306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.372435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.372482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.372609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.372639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.372806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.372833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 
00:35:49.221 [2024-11-20 00:00:23.372929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.372956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.373103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.373131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.373214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.373241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.373367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.373394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.373535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.373567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.373740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.373768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.373890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.373935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.374089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.374136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.374258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.374287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.374397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.374442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 
00:35:49.221 [2024-11-20 00:00:23.374547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.374576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.374725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.374753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.374847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.374882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.375000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.375028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.375128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.375158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.375284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.375313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.375454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.375484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.375651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.375678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.375802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.221 [2024-11-20 00:00:23.375848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.221 qpair failed and we were unable to recover it. 00:35:49.221 [2024-11-20 00:00:23.375986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.376028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 
00:35:49.222 [2024-11-20 00:00:23.376124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.376152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.376277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.376305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.376409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.376460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.376611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.376640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.376756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.376784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.376922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.376952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.377079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.377108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.377234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.377261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.377413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.377448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.377628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.377655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 
00:35:49.222 [2024-11-20 00:00:23.377773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.377817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.377935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.377980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.378102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.378131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.378229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.378259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.378407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.378437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.378544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.378572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.378693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.378721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.378824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.378855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.379026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.379053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.379160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.379188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 
00:35:49.222 [2024-11-20 00:00:23.379332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.379359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.379444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.379471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.379605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.379632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.379904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.222 [2024-11-20 00:00:23.379961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.222 qpair failed and we were unable to recover it. 00:35:49.222 [2024-11-20 00:00:23.380111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.380143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.380258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.380285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.380418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.380448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.380601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.380636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.380721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.380748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.380873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.380910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 
00:35:49.223 [2024-11-20 00:00:23.381040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.381077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.381170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.381198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.381318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.381345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.381442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.381470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.381614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.381642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.381784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.381815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.381983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.382010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.382125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.382153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.382275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.382302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.382401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.382430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 
00:35:49.223 [2024-11-20 00:00:23.382586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.382630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.382865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.382930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.383098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.383127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.383257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.383286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.383409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.383439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.383533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.383577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.383705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.383733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.383849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.383879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.384010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.384038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.384176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.384217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 
00:35:49.223 [2024-11-20 00:00:23.384410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.384439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.223 [2024-11-20 00:00:23.384554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.223 [2024-11-20 00:00:23.384584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.223 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.384702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.384729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.384902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.384931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.385020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.385047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.385176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.385205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.385330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.385375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.385511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.385538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.385663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.385690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.385832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.385862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 
00:35:49.224 [2024-11-20 00:00:23.386000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.386027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.386134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.386162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.386310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.386338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.386499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.386526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.386613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.386640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.386793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.386820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.386944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.386971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.387134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.387162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.387284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.387312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.387443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.387470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 
00:35:49.224 [2024-11-20 00:00:23.387605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.387632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.387739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.387772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.387882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.387911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.387998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.388026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.388150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.388179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.388271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.388298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.388425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.388452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.388547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.388575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.388710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.388751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.388938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.388986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 
00:35:49.224 [2024-11-20 00:00:23.389113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.389143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.389293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.389331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.224 qpair failed and we were unable to recover it. 00:35:49.224 [2024-11-20 00:00:23.389440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.224 [2024-11-20 00:00:23.389485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.389649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.389681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.389883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.389941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.390036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.390084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.390195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.390223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.390344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.390371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.390493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.390520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.390754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.390813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 
00:35:49.225 [2024-11-20 00:00:23.390942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.390973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.391083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.391147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.391269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.391299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.391450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.391496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.391698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.391764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.391862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.391890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.392022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.392050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.392184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.392212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.392339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.392396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.392575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.392640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 
00:35:49.225 [2024-11-20 00:00:23.392845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.392873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.392995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.393022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.393170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.393199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.393294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.393321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.393497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.393542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.393747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.393779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.393915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.393947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.394100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.394128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.394245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.394272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 00:35:49.225 [2024-11-20 00:00:23.394397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.225 [2024-11-20 00:00:23.394440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.225 qpair failed and we were unable to recover it. 
00:35:49.225 [2024-11-20 00:00:23.394691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.394743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.394906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.394938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.395089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.395118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.395237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.395264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.395381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.395428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.395639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.395692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.395834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.395865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.395972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.396000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.396105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.396132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.396253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.396280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 
00:35:49.226 [2024-11-20 00:00:23.396371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.396397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.396484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.396528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.396680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.396710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.396826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.396860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.396977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.397005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.397099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.397128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.397272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.397299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.397404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.397445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.397610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.397639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.397754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.397783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 
00:35:49.226 [2024-11-20 00:00:23.397911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.397941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.398037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.398067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.398216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.398243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.398377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.398405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.398530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.398565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.398771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.398800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.398922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.398952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.399064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.399116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.399237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.226 [2024-11-20 00:00:23.399264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.226 qpair failed and we were unable to recover it. 00:35:49.226 [2024-11-20 00:00:23.399356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.399391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 
00:35:49.227 [2024-11-20 00:00:23.399509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.399541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.399673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.399703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.399870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.399900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.400060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.400107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.400250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.400282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.400431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.400487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.400647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.400679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.400781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.400811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.400957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.400989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.401141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.401170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 
00:35:49.227 [2024-11-20 00:00:23.401317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.401375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.401470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.401499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.401643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.401698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.401800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.401828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.401945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.401985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.402132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.402165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.402309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.402339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.402488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.402539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.402675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.402729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.402822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.402854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 
00:35:49.227 [2024-11-20 00:00:23.402995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.403035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.403183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.403228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.403332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.403372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.403475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.403508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.403625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.403652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.403748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.403777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.403865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.403892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.403981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.404014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.404111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.404139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 00:35:49.227 [2024-11-20 00:00:23.404238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.227 [2024-11-20 00:00:23.404264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.227 qpair failed and we were unable to recover it. 
00:35:49.228 [2024-11-20 00:00:23.404351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.404381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.404529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.404563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.404662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.404692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.404824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.404855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.404991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.405019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.405146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.405175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.405290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.405320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.405466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.405498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.405619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.405650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.405742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.405772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 
00:35:49.228 [2024-11-20 00:00:23.405898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.405930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.406031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.406060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.406192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.406220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.406333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.406362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.406559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.406618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.406736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.406798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.406917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.406945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.407035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.407064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.407170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.407198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 00:35:49.228 [2024-11-20 00:00:23.407316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.228 [2024-11-20 00:00:23.407343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.228 qpair failed and we were unable to recover it. 
00:35:49.228 [2024-11-20 00:00:23.407472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.228 [2024-11-20 00:00:23.407504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420
00:35:49.228 qpair failed and we were unable to recover it.
00:35:49.228-00:35:49.235 [2024-11-20 00:00:23.407 through 00:00:23.446] the same three-message failure sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; qpair failed and we were unable to recover it.) repeats for every remaining connection attempt in this window, cycling over tqpair=0x129cb40, 0x7f6064000b90, 0x7f6068000b90 and 0x7f6070000b90, all with addr=10.0.0.2, port=4420.
00:35:49.235 [2024-11-20 00:00:23.446354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.446388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.446530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.446574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.446744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.446803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.446956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.446987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.447082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.447111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.447247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.447298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.447466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.447540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.447734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.447765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.447894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.447925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.448061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.448094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 
00:35:49.235 [2024-11-20 00:00:23.448185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.448211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.448330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.448358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.448461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.448492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.448596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.448625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.448750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.448779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.448907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.448936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.449089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.449135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.449261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.449289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.449371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.449398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.449552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.449579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 
00:35:49.235 [2024-11-20 00:00:23.449755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.449800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.449996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.450026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.450155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.450183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.450334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.450361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.450554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.235 [2024-11-20 00:00:23.450585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.235 qpair failed and we were unable to recover it. 00:35:49.235 [2024-11-20 00:00:23.450746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.450809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.450941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.450971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.451083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.451111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.451224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.451251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.451344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.451389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 
00:35:49.236 [2024-11-20 00:00:23.451585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.451653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.451777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.451806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.451902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.451937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.452120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.452161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.452289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.452318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.452467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.452514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.452652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.452697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.452849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.452877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.452995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.453023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.453133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.453161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 
00:35:49.236 [2024-11-20 00:00:23.453306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.453333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.453450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.453476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.453616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.453646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.453802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.453832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.453987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.454017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.454136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.454183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.454327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.454371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.454628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.454684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.454873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.454925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.455053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.455095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 
00:35:49.236 [2024-11-20 00:00:23.455256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.455297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.455409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.455442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.455608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.455663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.455846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.455903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.456044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.456077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.456170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.456199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.456287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.456314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.456443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.456473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.456570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.236 [2024-11-20 00:00:23.456599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.236 qpair failed and we were unable to recover it. 00:35:49.236 [2024-11-20 00:00:23.456731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.456801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 
00:35:49.237 [2024-11-20 00:00:23.456962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.456992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.457129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.457157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.457307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.457336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.457519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.457550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.457681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.457711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.457814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.457844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.458010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.458041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.458180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.458221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.458355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.458384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.458521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.458551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 
00:35:49.237 [2024-11-20 00:00:23.458689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.458719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.458849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.458893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.459055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.459096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.459192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.459219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.459384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.459414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.459533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.459577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.459700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.459729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.459870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.459903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.460044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.460079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.460172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.460200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 
00:35:49.237 [2024-11-20 00:00:23.460336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.460381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.460555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.460628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.460753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.460814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.460919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.460949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.461085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.461126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.461238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.461279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.461413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.461459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.461591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.461621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.461817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.461847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.461946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.461977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 
00:35:49.237 [2024-11-20 00:00:23.462133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.462174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.462328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.462357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.462477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.462522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.462646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.462675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.462838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.462867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.462970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.463000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.463168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.463195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.463320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.463348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.237 [2024-11-20 00:00:23.463444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.237 [2024-11-20 00:00:23.463471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.237 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.463587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.463617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 
00:35:49.238 [2024-11-20 00:00:23.463772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.463803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.463928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.463958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.464104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.464131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.464220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.464248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.464336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.464364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.464502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.464531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.464641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.464687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.464817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.464848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.464985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.465016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.465166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.465194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 
00:35:49.238 [2024-11-20 00:00:23.465312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.465340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.465497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.465527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.465662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.465693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.465854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.465885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.466020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.466065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.466212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.466253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.466400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.466448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.466585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.466631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.466783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.466831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.466950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.466978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 
00:35:49.238 [2024-11-20 00:00:23.467097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.467126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.467262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.467290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.467397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.467428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.467531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.467558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.467711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.467762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.467887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.467917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.468048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.468096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.468261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.468290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.468424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.468476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 00:35:49.238 [2024-11-20 00:00:23.468643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.238 [2024-11-20 00:00:23.468689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.238 qpair failed and we were unable to recover it. 
00:35:49.239 [2024-11-20 00:00:23.468797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.239 [2024-11-20 00:00:23.468842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:49.239 qpair failed and we were unable to recover it.
[... the same pair of messages (posix.c:1054 connect() failed, errno = 111, followed by the nvme_tcp.c:2288 sock connection error and "qpair failed and we were unable to recover it.") repeats continuously between 00:00:23.468986 and 00:00:23.505390, cycling through tqpair handles 0x7f6068000b90, 0x7f6064000b90, and 0x129cb40, always against addr=10.0.0.2, port=4420 ...]
00:35:49.541 [2024-11-20 00:00:23.505556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.541 [2024-11-20 00:00:23.505602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:49.541 qpair failed and we were unable to recover it.
00:35:49.541 [2024-11-20 00:00:23.505772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.505820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.505915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.505943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.506067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.506100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.506210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.506237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.506403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.506433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.506570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.506601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.506806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.506835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.506962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.506991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.507142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.507171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.507310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.507360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 
00:35:49.541 [2024-11-20 00:00:23.507533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.507593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.507777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.507826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.507948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.507975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.508178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.508206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.508339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.508379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.508536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.508572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.508710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.508741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.508918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.508946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.509046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.541 [2024-11-20 00:00:23.509083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.541 qpair failed and we were unable to recover it. 00:35:49.541 [2024-11-20 00:00:23.509204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.509232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 
00:35:49.542 [2024-11-20 00:00:23.509356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.509402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.509575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.509620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.509835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.509867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.509993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.510024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.510169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.510197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.510306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.510336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.510465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.510496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.510652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.510683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.510815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.510846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.511007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.511037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 
00:35:49.542 [2024-11-20 00:00:23.511185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.511213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.511331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.511358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.511465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.511492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.511614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.511644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.511759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.511804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.511924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.511951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.512102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.512130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.512227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.512254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.512347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.512375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.512460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.512504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 
00:35:49.542 [2024-11-20 00:00:23.512647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.512693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.512790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.512821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.512934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.512966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.513106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.513134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.513240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.513281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.513446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.513491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.513617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.513662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.513821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.513852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.513952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.513982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.514097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.514125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 
00:35:49.542 [2024-11-20 00:00:23.514220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.514247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.514356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.514387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.514498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.514525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.514683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.514715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.514863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.514908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.515082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.515110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.515245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.542 [2024-11-20 00:00:23.515272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.542 qpair failed and we were unable to recover it. 00:35:49.542 [2024-11-20 00:00:23.515387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.515434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.515577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.515608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.515767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.515797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 
00:35:49.543 [2024-11-20 00:00:23.515932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.515963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.516106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.516134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.516225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.516252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.516419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.516450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.516675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.516728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.516842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.516874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.517000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.517030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.517163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.517192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.517290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.517318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.517457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.517489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 
00:35:49.543 [2024-11-20 00:00:23.517667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.517747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.517919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.517949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.518066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.518103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.518222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.518264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.518391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.518421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.518516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.518545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.518723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.518754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.518900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.518928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.519049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.519086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.519189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.519217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 
00:35:49.543 [2024-11-20 00:00:23.519360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.519387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.519584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.519612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.519799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.519858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.519989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.520033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.520160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.520188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.520336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.520366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.520462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.520494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.520592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.520622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.520729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.520798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.520949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.520979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 
00:35:49.543 [2024-11-20 00:00:23.521101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.521128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.521253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.521281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.521447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.521478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.521575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.521606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.521751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.543 [2024-11-20 00:00:23.521824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.543 qpair failed and we were unable to recover it. 00:35:49.543 [2024-11-20 00:00:23.521960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.521987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.522105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.522132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.522248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.522275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.522368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.522396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.522544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.522588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 
00:35:49.544 [2024-11-20 00:00:23.522689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.522718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.522835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.522877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.523011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.523040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.523155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.523183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.523269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.523314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.523407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.523437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.523571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.523602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.523731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.523762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.523917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.523959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.524082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.524116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 
00:35:49.544 [2024-11-20 00:00:23.524230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.524271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.524426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.524477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.524618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.524663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.524771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.524818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.524907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.524936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.525057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.525098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.525232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.525261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.525405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.525437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.525571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.525601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.525782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.525839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 
00:35:49.544 [2024-11-20 00:00:23.525949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.525978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.526107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.526136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.526220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.526252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.526365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.526396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.526535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.526566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.526698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.526728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.526853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.526883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.526993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.527020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.527143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.527171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.527262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.527289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 
00:35:49.544 [2024-11-20 00:00:23.527382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.527410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.527557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.527588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.527724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.544 [2024-11-20 00:00:23.527754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.544 qpair failed and we were unable to recover it. 00:35:49.544 [2024-11-20 00:00:23.527857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.527888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.527984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.528018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.528184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.528226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.528413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.528458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.528606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.528652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.528848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.528897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.529051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.529087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 
00:35:49.545 [2024-11-20 00:00:23.529197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.529224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.529308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.529335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.529469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.529513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.529648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.529679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.529787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.529817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.529907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.529937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.530067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.530120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.530242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.530269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.530377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.530418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.530537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.530585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 
00:35:49.545 [2024-11-20 00:00:23.530757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.530802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.530894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.530921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.531057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.531112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.531249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.531300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.531405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.531451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.531574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.531601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.531747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.531775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.531901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.531930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.532040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.532091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.532226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.532266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 
00:35:49.545 [2024-11-20 00:00:23.532366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.532396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.532545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.532574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.532685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.532715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.532861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.532892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.533057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.533096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.533210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.545 [2024-11-20 00:00:23.533238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.545 qpair failed and we were unable to recover it. 00:35:49.545 [2024-11-20 00:00:23.533345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.533375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.533518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.533571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.533691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.533721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.533832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.533862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 
00:35:49.546 [2024-11-20 00:00:23.533989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.534016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.534127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.534159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.534289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.534320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.534439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.534484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.534616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.534665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.534766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.534793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.534889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.534923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.535016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.535045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.535174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.535203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.535319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.535347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 
00:35:49.546 [2024-11-20 00:00:23.535493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.535522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.535656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.535686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.535831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.535859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.535956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.535984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.536102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.536131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.536241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.536286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.536430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.536476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.536565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.536593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.536750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.536777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.536889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.536932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 
00:35:49.546 [2024-11-20 00:00:23.537036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.537065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.537227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.537259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.537372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.537402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.537581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.537637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.537829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.537893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.538004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.538033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.538175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.538219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.538362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.538406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.538544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.538590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 00:35:49.546 [2024-11-20 00:00:23.538687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.546 [2024-11-20 00:00:23.538713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.546 qpair failed and we were unable to recover it. 
00:35:49.546 [2024-11-20 00:00:23.538800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.538827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.538951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.538979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.539073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.539101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.539258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.539302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.539406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.539438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.539542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.539573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.539683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.539749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.539880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.539910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.540030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.540092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.540243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.540273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 
00:35:49.547 [2024-11-20 00:00:23.540441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.540486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.540655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.540687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.540800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.540827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.540951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.540980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.541080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.541109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.541211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.541239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.541347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.541383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.541498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.541565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.541709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.541763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.541903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.541931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 
00:35:49.547 [2024-11-20 00:00:23.542029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.542056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.542224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.542265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.542389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.542422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.542515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.542546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.542654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.542681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.542832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.542864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.543000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.543030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.543142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.543170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.543316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.543343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.543483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.543513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 
00:35:49.547 [2024-11-20 00:00:23.543729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.543788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.547 [2024-11-20 00:00:23.543945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.547 [2024-11-20 00:00:23.543975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.547 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.544117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.544144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.544233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.544260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.544348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.544392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.544554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.544612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.544767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.544797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.544924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.544957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.545105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.545165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.545296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.545363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 
00:35:49.548 [2024-11-20 00:00:23.545524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.545582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.545691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.545759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.545854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.545881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.546034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.546067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.546167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.546210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.546343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.546373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.546653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.546705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.546876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.546930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.547036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.547067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.547199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.547229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 
00:35:49.548 [2024-11-20 00:00:23.547411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.547464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.547626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.547681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.547784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.547813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.547938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.547968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.548099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.548140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.548234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.548263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.548381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.548422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.548588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.548622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.548783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.548814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.548975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.549007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 
00:35:49.548 [2024-11-20 00:00:23.549139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.549167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.549292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.549319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.549411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.549438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.549591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.549635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.549749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.549794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.549952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.549982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.550085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.550131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.550245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.550273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.550387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.550414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 00:35:49.548 [2024-11-20 00:00:23.550554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.548 [2024-11-20 00:00:23.550584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.548 qpair failed and we were unable to recover it. 
00:35:49.549 [2024-11-20 00:00:23.550702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.550750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.550874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.550904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.551028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.551057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.551203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.551230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.551332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.551372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.551533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.551563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.551690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.551737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.551867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.551898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.551993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.552023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.552184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.552225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 
00:35:49.549 [2024-11-20 00:00:23.552335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.552367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.552569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.552599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.552709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.552738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.552955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.552985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.553096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.553140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.553242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.553271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.553446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.553509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.553701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.553757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.553902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.553933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.554050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.554087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 
00:35:49.549 [2024-11-20 00:00:23.554212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.554239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.554334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.554361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.554554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.554611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.554743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.554786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.554928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.554956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.555156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.555184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.555335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.555361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.555463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.555497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.555751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.555801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.556002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.556032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 
00:35:49.549 [2024-11-20 00:00:23.556214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.556242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.556338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.556382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.556569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.556633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.556743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.556769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.549 qpair failed and we were unable to recover it. 00:35:49.549 [2024-11-20 00:00:23.556924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.549 [2024-11-20 00:00:23.556969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.557130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.557170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.557288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.557328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.557500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.557566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.557675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.557744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.557858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.557885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 
00:35:49.550 [2024-11-20 00:00:23.557985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.558014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.558127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.558159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.558292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.558321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.558430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.558461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.558620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.558680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.558879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.558937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.559080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.559108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.559196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.559224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.559344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.559371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.559530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.559560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 
00:35:49.550 [2024-11-20 00:00:23.559732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.559795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.559915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.559961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.560142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.560172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.560270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.560299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.560435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.560463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.560605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.560636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.560732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.560763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.560867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.560898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.561047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.561102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.561255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.561295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 
00:35:49.550 [2024-11-20 00:00:23.561418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.561449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.561602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.561631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.561755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.561785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.561942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.561991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.562092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.562121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.562243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.562270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.562453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.562509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.562706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.562768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.562905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.562935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 00:35:49.550 [2024-11-20 00:00:23.563039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.550 [2024-11-20 00:00:23.563077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.550 qpair failed and we were unable to recover it. 
00:35:49.551 [2024-11-20 00:00:23.563212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.563239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.563356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.563383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.563590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.563645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.563831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.563890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.564002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.564033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.564206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.564233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.564357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.564383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.564483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.564511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.564695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.564738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.564869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.564899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 
00:35:49.551 [2024-11-20 00:00:23.565030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.565060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.565190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.565230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.565402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.565434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.565533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.565563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.565673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.565717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.565905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.565935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.566081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.566126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.566266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.566295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.566506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.566561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.566741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.566793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 
00:35:49.551 [2024-11-20 00:00:23.566922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.566953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.567101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.567142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.567294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.567340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.567571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.567623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.567825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.567861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.567988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.568016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.568140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.568169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.568302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.568332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.568456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.568485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.568609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.568637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 
00:35:49.551 [2024-11-20 00:00:23.568734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.568762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.568921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.568962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.569130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.569171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.569440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.569472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.569578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.569608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.569790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.569817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.569951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.569978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.570079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.551 [2024-11-20 00:00:23.570107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.551 qpair failed and we were unable to recover it. 00:35:49.551 [2024-11-20 00:00:23.570228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.570254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.570393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.570423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 
00:35:49.552 [2024-11-20 00:00:23.570559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.570588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.570724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.570755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.570911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.570961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.571055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.571090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.571185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.571212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.571390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.571420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.571602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.571648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.571815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.571861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.571966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.571994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.572120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.572148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 
00:35:49.552 [2024-11-20 00:00:23.572271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.572297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.572430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.572468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.572632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.572696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.572828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.572857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.573003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.573031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.573137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.573166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.573305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.573350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.573442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.573469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.573678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.573748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.573866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.573893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 
00:35:49.552 [2024-11-20 00:00:23.574008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.574037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.574166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.574194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.574381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.574411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.574577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.574607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.574736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.574765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.574894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.574921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.575044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.575078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.575175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.575201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.575302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.575329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.575456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.575501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 
00:35:49.552 [2024-11-20 00:00:23.575632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.575661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.575819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.575848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.575974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.576001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.576091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.576118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.576239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.576268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.576382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.552 [2024-11-20 00:00:23.576411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.552 qpair failed and we were unable to recover it. 00:35:49.552 [2024-11-20 00:00:23.576535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.576563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.576697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.576726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.576869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.576917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.577093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.577124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 
00:35:49.553 [2024-11-20 00:00:23.577224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.577251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.577371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.577406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.577490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.577516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.577608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.577651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.577757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.577786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.577948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.577977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.578120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.578147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.578292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.578317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.578442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.578467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.578607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.578637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 
00:35:49.553 [2024-11-20 00:00:23.578763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.578791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.578896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.578926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.579032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.579076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.579204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.579231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.579349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.579382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.579553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.579592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.579746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.579774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.579963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.579992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.580132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.580176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.580270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.580295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 
00:35:49.553 [2024-11-20 00:00:23.580460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.580490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.580596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.580625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.580732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.580761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.580946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.580972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.581098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.581125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.581241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.581271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.581405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.581432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.581543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.581572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.581672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.581700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.581842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.581871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 
00:35:49.553 [2024-11-20 00:00:23.581976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.582004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.582122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.582149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.582260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.582286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.582405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.553 [2024-11-20 00:00:23.582457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.553 qpair failed and we were unable to recover it. 00:35:49.553 [2024-11-20 00:00:23.582594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.582622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.582741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.582783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.582953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.582982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.583131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.583158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.583277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.583302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.583430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.583473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 
00:35:49.554 [2024-11-20 00:00:23.583646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.583692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.583904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.583929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.584014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.584041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.584163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.584189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.584335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.584361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.584482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.584513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.584608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.584637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.584769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.584798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.584927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.584956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.585066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.585100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 
00:35:49.554 [2024-11-20 00:00:23.585204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.585229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.585318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.585344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.585509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.585537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.585703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.585731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.585837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.585866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.585987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.586014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.586147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.586173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.586260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.586286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.586427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.586455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.586578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.586621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 
00:35:49.554 [2024-11-20 00:00:23.586760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.586789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.586918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.586946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.587062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.587094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.587222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.587248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.587385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.587414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.587557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.587601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.587776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.587805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.554 [2024-11-20 00:00:23.587931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.554 [2024-11-20 00:00:23.587959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.554 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.588120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.588147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.588235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.588261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 
00:35:49.555 [2024-11-20 00:00:23.588382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.588408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.588498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.588524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.588671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.588698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.588871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.588900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.589056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.589099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.589240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.589265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.589362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.589388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.589501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.589527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.589647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.589674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.589795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.589820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 
00:35:49.555 [2024-11-20 00:00:23.589916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.589942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.590063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.590096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.590256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.590300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.590444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.590472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.590625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.590670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.590773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.590817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.590941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.590968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.591063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.591101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.591245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.591290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.591408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.591434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 
00:35:49.555 [2024-11-20 00:00:23.591610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.591638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.591836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.591866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.591973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.591998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.592124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.592155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.592252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.592278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.592422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.592448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.592535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.592560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.592735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.592763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.592874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.592901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.593022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.593048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 
00:35:49.555 [2024-11-20 00:00:23.593178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.593207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.593295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.593323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.593444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.593471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.593642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.593669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.593787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.593814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-11-20 00:00:23.593936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.555 [2024-11-20 00:00:23.593965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.594125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.594152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.594287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.594313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.594448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.594491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.594701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.594730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 
00:35:49.556 [2024-11-20 00:00:23.594864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.594889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.594985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.595011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.595113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.595142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.595239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.595266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.595356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.595386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.595527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.595554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.595677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.595704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.595869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.595899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.595991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.596020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.596184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.596211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 
00:35:49.556 [2024-11-20 00:00:23.596316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.596344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.596466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.596493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.596618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.596644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.596771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.596815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.597000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.597040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.597178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.597206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.597300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.597326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.597422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.597450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.597546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.597571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.597694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.597721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 
00:35:49.556 [2024-11-20 00:00:23.597854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.597879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.598033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.598059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.598171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.598198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.598318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.598344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.598473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.598499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.598615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.598641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.598756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.598785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.598933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.598963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.599113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.599141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.599254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.599280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 
00:35:49.556 [2024-11-20 00:00:23.599369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.599397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.599521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.599549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.599693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.599736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.556 qpair failed and we were unable to recover it. 00:35:49.556 [2024-11-20 00:00:23.599857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.556 [2024-11-20 00:00:23.599882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.600003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.600028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.600203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.600230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.600352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.600378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.600499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.600524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.600669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.600700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.600819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.600845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 
00:35:49.557 [2024-11-20 00:00:23.600947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.600975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.601063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.601100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.601220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.601247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.601347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.601374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.601491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.601520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.601664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.601689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.601779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.601805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.601951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.601980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.602116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.602144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.602268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.602294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 
00:35:49.557 [2024-11-20 00:00:23.602432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.602459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.602611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.602638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.602761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.602805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.602901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.602934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.603106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.603134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.603266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.603293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.603441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.603473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.603646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.603672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.603764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.603791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.603939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.603969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 
00:35:49.557 [2024-11-20 00:00:23.604109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.604136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.604271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.604298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.604422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.604452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.604594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.604620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.604723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.604749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.604870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.604896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.605009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.605035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.605133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.605160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.605280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.605316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.605397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.605423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 
00:35:49.557 [2024-11-20 00:00:23.605517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.605544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.557 [2024-11-20 00:00:23.605644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.557 [2024-11-20 00:00:23.605673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.557 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.605822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.605848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.605996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.606022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.606169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.606199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.606294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.606322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.606449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.606477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.606633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.606666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.606782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.606810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.606931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.606958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 
00:35:49.558 [2024-11-20 00:00:23.607154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.607183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.607330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.607356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.607465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.607508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.607638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.607667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.607813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.607839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.607960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.607986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.608172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.608199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.608318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.608345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.608448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.608474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.608629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.608655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 
00:35:49.558 [2024-11-20 00:00:23.608749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.608775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.608903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.608929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.609086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.609113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.609234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.609260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.609378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.609404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.609560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.609590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.609733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.609760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.609848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.609875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.610006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.610035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.610190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.610216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 
00:35:49.558 [2024-11-20 00:00:23.610338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.610390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.610525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.610554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.610663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.610689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.610837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.610865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.611005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.611034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.611162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.611189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.611331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.611357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.611503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.611548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.611695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.611724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 00:35:49.558 [2024-11-20 00:00:23.611856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.558 [2024-11-20 00:00:23.611885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.558 qpair failed and we were unable to recover it. 
00:35:49.559 [2024-11-20 00:00:23.612061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.612101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.612268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.612295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.612428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.612458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.612577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.612605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.612723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.612749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.612875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.612901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.613045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.613080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.613215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.613240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.613415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.613445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.613598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.613627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 
00:35:49.559 [2024-11-20 00:00:23.613737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.613771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.613900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.613927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.614040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.614075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.614240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.614266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.614467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.614496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.614655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.614685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.614820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.614846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.614971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.614997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.615115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.615159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.615302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.615336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 
00:35:49.559 [2024-11-20 00:00:23.615437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.615462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.615551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.615576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.615697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.615722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.615845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.615872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.616041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.616083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.616219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.616245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.616369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.616396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.616510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.616536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.616653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.616679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.616774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.616799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 
00:35:49.559 [2024-11-20 00:00:23.616915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.616945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.617106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.617132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.559 qpair failed and we were unable to recover it. 00:35:49.559 [2024-11-20 00:00:23.617224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.559 [2024-11-20 00:00:23.617250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.617395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.617422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.617538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.617563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.617797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.617826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.617928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.617971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.618091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.618118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.618207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.618232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.618349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.618375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 
00:35:49.560 [2024-11-20 00:00:23.618467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.618493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.618636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.618663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.618748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.618774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.618890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.618916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.619038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.619064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.619196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.619221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.619343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.619369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.619480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.619506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.619661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.619689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.619842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.619869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 
00:35:49.560 [2024-11-20 00:00:23.619994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.620020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.620146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.620174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.620301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.620327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.620447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.620473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.620579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.620608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.620723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.620749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.620841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.620866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.621006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.621035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.621156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.621181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.621302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.621328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 
00:35:49.560 [2024-11-20 00:00:23.621470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.621498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.621639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.621665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.621754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.621784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.621943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.621972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.622117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.622144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.622277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.622303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.622431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.622457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.622548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.622575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.622672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.622698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.560 [2024-11-20 00:00:23.622830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.622858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 
00:35:49.560 [2024-11-20 00:00:23.623023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.560 [2024-11-20 00:00:23.623049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.560 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.623163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.623189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.623313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.623338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.623473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.623499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.623588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.623613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.623733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.623759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.623863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.623902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.624043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.624089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.624227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.624252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.624372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.624398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 
00:35:49.561 [2024-11-20 00:00:23.624494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.624519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.624657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.624701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.624852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.624877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.624998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.625042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.625196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.625224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.625346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.625372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.625471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.625497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.625647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.625675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.625827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.625854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.625954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.625984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 
00:35:49.561 [2024-11-20 00:00:23.626129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.626155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.626297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.626323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.626430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.626455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.626583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.626610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.626705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.626731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.626822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.626848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.626939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.626965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.627060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.627091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.627197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.627223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.627344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.627369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 
00:35:49.561 [2024-11-20 00:00:23.627489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.627515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.627632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.627657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.627803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.627842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.628005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.628037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.628205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.628234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.628356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.628383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.628535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.628562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.628708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.628735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.561 qpair failed and we were unable to recover it. 00:35:49.561 [2024-11-20 00:00:23.628877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.561 [2024-11-20 00:00:23.628907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.629013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.629038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 
00:35:49.562 [2024-11-20 00:00:23.629190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.629217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.629306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.629333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.629460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.629485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.629605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.629632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.629740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.629768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.629937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.629964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.630065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.630105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.630243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.630269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.630363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.630390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.630514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.630540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 
00:35:49.562 [2024-11-20 00:00:23.630633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.630659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.630742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.630767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.630893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.630919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.631052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.631109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.631235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.631263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.631390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.631426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.631513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.631541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.631629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.631656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.631745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.631772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.631894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.631921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 
00:35:49.562 [2024-11-20 00:00:23.632042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.632076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.632192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.632218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.632365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.632390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.632520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.632545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.632670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.632696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.632860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.632888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.633028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.633054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.633201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.633227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.633358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.633388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.633504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.633529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 
00:35:49.562 [2024-11-20 00:00:23.633649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.633675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.633811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.633838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.633981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.634007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.634128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.634159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.634259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.634288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.634413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.634440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.562 [2024-11-20 00:00:23.634561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.562 [2024-11-20 00:00:23.634588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.562 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.634747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.634784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.634923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.634950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.635109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.635137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 
00:35:49.563 [2024-11-20 00:00:23.635233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.635260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.635354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.635379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.635502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.635528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.635695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.635724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.635862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.635887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.636010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.636036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.636172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.636199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.636294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.636322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.636476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.636505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.636634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.636663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 
00:35:49.563 [2024-11-20 00:00:23.636770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.636797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.636893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.636920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.637056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.637089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.637208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.637234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.637396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.637424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.637564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.637589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.637733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.637759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.637927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.637955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.638050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.638084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 00:35:49.563 [2024-11-20 00:00:23.638221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.563 [2024-11-20 00:00:23.638248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.563 qpair failed and we were unable to recover it. 
00:35:49.563 [2024-11-20 00:00:23.638409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.563 [2024-11-20 00:00:23.638442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420
00:35:49.563 qpair failed and we were unable to recover it.
[... the same three-line record repeats for every connection attempt logged between 00:00:23.638 and 00:00:23.670, alternating between tqpair=0x129cb40 and tqpair=0x7f6064000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:35:49.569 [2024-11-20 00:00:23.670542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.569 [2024-11-20 00:00:23.670568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420
00:35:49.569 qpair failed and we were unable to recover it.
00:35:49.569 [2024-11-20 00:00:23.670653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.670679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.670831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.670857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.671009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.671054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.671172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.671197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.671348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.671374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.671522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.671547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.671641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.671670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.671786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.671818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.671967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.671994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.672085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.672113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 
00:35:49.569 [2024-11-20 00:00:23.672230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.672257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.672348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.672376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.672471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.672498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.672634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.672663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.672798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.672826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.672954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.672981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.673098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.673144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.673241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.673267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.673389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.673415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.673547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.673576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 
00:35:49.569 [2024-11-20 00:00:23.673722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.673748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.673843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.673868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.674040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.569 [2024-11-20 00:00:23.674074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.569 qpair failed and we were unable to recover it. 00:35:49.569 [2024-11-20 00:00:23.674221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.674248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.674392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.674422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.674554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.674583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.674749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.674776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.674897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.674939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.675067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.675105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.675248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.675275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 
00:35:49.570 [2024-11-20 00:00:23.675367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.675394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.675510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.675554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.675701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.675728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.675851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.675881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.676037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.676066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.676187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.676215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.676365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.676406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.676558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.676584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.676700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.676727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.676850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.676877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 
00:35:49.570 [2024-11-20 00:00:23.676994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.677021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.677152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.677179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.677314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.677341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.677461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.677492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.677634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.677660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.677782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.677809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.677953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.677988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.678142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.678170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.678264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.678292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.678415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.678442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 
00:35:49.570 [2024-11-20 00:00:23.678562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.678589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.678739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.678784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.678882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.678912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.679025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.679052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.679181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.679209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.679301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.679328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.679449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.679476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.679625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.679655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.679788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.679817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.570 [2024-11-20 00:00:23.679945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.679990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 
00:35:49.570 [2024-11-20 00:00:23.680129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.570 [2024-11-20 00:00:23.680174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.570 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.680273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.680300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.680425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.680451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.680571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.680598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.680723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.680749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.680842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.680870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.680994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.681022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.681207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.681238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.681371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.681398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.681530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.681559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 
00:35:49.571 [2024-11-20 00:00:23.681707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.681736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.681848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.681874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.681980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.682008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.682178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.682218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.682349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.682378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.682523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.682551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.682651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.682688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.682844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.682870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.682995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.683021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.683213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.683242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 
00:35:49.571 [2024-11-20 00:00:23.683333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.683360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.683492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.683519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.683677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.683703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.683848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.683875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.683968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.683996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.684146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.684173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.684299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.684330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.684458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.684500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.684599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.684643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.684763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.684788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 
00:35:49.571 [2024-11-20 00:00:23.684932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.684960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.685092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.685121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.685230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.685255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.685402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.685429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.685576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.685606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.685750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.685777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.685900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.685927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.686078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.686109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.571 [2024-11-20 00:00:23.686218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.571 [2024-11-20 00:00:23.686245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.571 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.686365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.686392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 
00:35:49.572 [2024-11-20 00:00:23.686541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.686571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.686736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.686762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.686878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.686904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.687033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.687092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.687236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.687264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.687389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.687415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.687584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.687643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.687749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.687774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.687901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.687927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.688037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.688063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 
00:35:49.572 [2024-11-20 00:00:23.688190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.688215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.688366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.688410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.688566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.688595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.688754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.688785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.688961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.688989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.689128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.689157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.689291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.689318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.689442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.689468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.689642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.689671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.689787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.689813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 
00:35:49.572 [2024-11-20 00:00:23.689940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.689965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.690080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.690106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.690226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.690252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.690356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.690384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.690478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.690504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.690602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.690629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.690748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.690774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.690899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.690944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.691058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.691116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 00:35:49.572 [2024-11-20 00:00:23.691286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.572 [2024-11-20 00:00:23.691314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.572 qpair failed and we were unable to recover it. 
00:35:49.572 [2024-11-20 00:00:23.691449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.572 [2024-11-20 00:00:23.691480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420
00:35:49.572 qpair failed and we were unable to recover it.
00:35:49.572 [2024-11-20 00:00:23.691981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.572 [2024-11-20 00:00:23.692011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420
00:35:49.572 qpair failed and we were unable to recover it.
00:35:49.575 [2024-11-20 00:00:23.708044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.575 [2024-11-20 00:00:23.708101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:49.575 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 -> sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously for tqpair values 0x129cb40, 0x7f6064000b90, and 0x7f6068000b90, always against addr=10.0.0.2, port=4420, through [2024-11-20 00:00:23.724604] ...]
00:35:49.578 [2024-11-20 00:00:23.724743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.724774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 00:35:49.578 [2024-11-20 00:00:23.724902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.724933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 00:35:49.578 [2024-11-20 00:00:23.725112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.725140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 00:35:49.578 [2024-11-20 00:00:23.725256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.725304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 00:35:49.578 [2024-11-20 00:00:23.725417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.725462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 00:35:49.578 [2024-11-20 00:00:23.725588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.725615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 00:35:49.578 [2024-11-20 00:00:23.725732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.725759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 00:35:49.578 [2024-11-20 00:00:23.725861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.725900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 00:35:49.578 [2024-11-20 00:00:23.725997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.726026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 00:35:49.578 [2024-11-20 00:00:23.726168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.726196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 
00:35:49.578 [2024-11-20 00:00:23.726323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.726350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 00:35:49.578 [2024-11-20 00:00:23.726493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.726522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 00:35:49.578 [2024-11-20 00:00:23.726651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.578 [2024-11-20 00:00:23.726682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.578 qpair failed and we were unable to recover it. 00:35:49.578 [2024-11-20 00:00:23.726803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.726832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.726981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.727008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.727142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.727178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.727307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.727337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.727516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.727559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.727643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.727670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.727820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.727848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 
00:35:49.579 [2024-11-20 00:00:23.727970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.727997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.728139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.728178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.728324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.728355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.728519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.728550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.728643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.728672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.728788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.728815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.728947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.728975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.729078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.729106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.729211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.729240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.729394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.729420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 
00:35:49.579 [2024-11-20 00:00:23.729524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.729568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.729672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.729699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.729826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.729853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.729944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.729972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.730063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.730096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.730218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.730245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.730392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.730418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.730502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.730528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.730622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.730648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.730758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.730785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 
00:35:49.579 [2024-11-20 00:00:23.730904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.730931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.731026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.731053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.731157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.731188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.731299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.731327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.731455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.731482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.731606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.731633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.731747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.731775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.731898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.731925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.732046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.732077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.732228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.732254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 
00:35:49.579 [2024-11-20 00:00:23.732377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.732404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.732493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.732537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.732667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.732696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.732832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.579 [2024-11-20 00:00:23.732861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.579 qpair failed and we were unable to recover it. 00:35:49.579 [2024-11-20 00:00:23.732990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.733032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.733171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.733198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.733325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.733370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.733518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.733544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.733685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.733715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.733816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.733845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 
00:35:49.580 [2024-11-20 00:00:23.733948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.733976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.734144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.734184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.734305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.734337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.734491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.734539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.734684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.734712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.734838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.734865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.734953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.734980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.735087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.735115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.735215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.735241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.735375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.735415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 
00:35:49.580 [2024-11-20 00:00:23.735589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.735621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.735763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.735793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.735939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.735966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.736103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.736134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.736249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.736280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.736447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.736477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.736573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.736604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.736741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.736772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.736909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.736940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.737055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.737091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 
00:35:49.580 [2024-11-20 00:00:23.737193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.737220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.737356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.737400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.737543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.737571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.737706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.737750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.737867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.737895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.737996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.738023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.738143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.738173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.738309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.738338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.738451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.738481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.738577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.738606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 
00:35:49.580 [2024-11-20 00:00:23.738772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.738799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.738895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.738921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.739044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.739078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.739227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.739253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.739339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.580 [2024-11-20 00:00:23.739365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.580 qpair failed and we were unable to recover it. 00:35:49.580 [2024-11-20 00:00:23.739544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.739574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.739746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.739776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.739912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.739944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.740059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.740116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.740243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.740287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 
00:35:49.581 [2024-11-20 00:00:23.740415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.740445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.740577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.740608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.740734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.740764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.740891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.740921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.741075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.741103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.741219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.741246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.741358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.741385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.741501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.741544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.741681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.741711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.741838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.741872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 
00:35:49.581 [2024-11-20 00:00:23.742013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.742042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.742187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.742214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.742331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.742357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.742510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.742536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.742653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.742694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.742837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.742868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.743001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.743031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.743174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.743202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.743353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.743379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.743528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.743559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 
00:35:49.581 [2024-11-20 00:00:23.743686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.743716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.743875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.743905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.744063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.744116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.744242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.744269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.744433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.744462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.744569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.744598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.744760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.744789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.744920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.744951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.745132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.745160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.745277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.745304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 
00:35:49.581 [2024-11-20 00:00:23.745427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.745471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.745610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.745639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.745794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.745824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.745949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.745976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.581 qpair failed and we were unable to recover it. 00:35:49.581 [2024-11-20 00:00:23.746128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.581 [2024-11-20 00:00:23.746155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.746243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.746269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.746418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.746463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.746594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.746623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.746759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.746853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.746987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.747017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 
00:35:49.582 [2024-11-20 00:00:23.747171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.747198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.747319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.747345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.747441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.747467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.747555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.747596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.747786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.747816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.747953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.747985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.748097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.748142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.748269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.748296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.748420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.748463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.748610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.748641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 
00:35:49.582 [2024-11-20 00:00:23.748768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.748810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.748940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.748970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.749082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.749125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.749249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.749275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.749376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.749402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.749514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.749543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.749676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.749719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.749823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.582 [2024-11-20 00:00:23.749852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.582 qpair failed and we were unable to recover it. 00:35:49.582 [2024-11-20 00:00:23.749977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.750006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.750146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.750186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 
00:35:49.583 [2024-11-20 00:00:23.750309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.750338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.750477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.750522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.750689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.750720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.750851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.750883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.751040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.751067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.751201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.751229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.751316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.751343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.751446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.751473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.751618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.751644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.751731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.751758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 
00:35:49.583 [2024-11-20 00:00:23.751856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.751883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.752002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.752030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.752137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.752164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.752275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.752302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.752469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.752515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.752656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.752699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.752802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.752829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.752929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.752956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.753049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.753084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.753204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.753231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 
00:35:49.583 [2024-11-20 00:00:23.753325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.753352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.753475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.753503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.753626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.753652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.753803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.583 [2024-11-20 00:00:23.753831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.583 qpair failed and we were unable to recover it. 00:35:49.583 [2024-11-20 00:00:23.753942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.753982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.754118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.754147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.754245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.754273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.754396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.754424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.754550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.754577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.754698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.754725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 
00:35:49.584 [2024-11-20 00:00:23.754870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.754902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.755053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.755090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.755228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.755258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.755381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.755411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.755552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.755582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.755687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.755717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.755846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.755875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.755974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.756004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.756159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.756188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.756322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.756365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 
00:35:49.584 [2024-11-20 00:00:23.756547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.756578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.756709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.756738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.756837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.756866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.757010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.757036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.757138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.757166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.757259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.757287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.757423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.757453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.757588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.757618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.757726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.757753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.757878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.757910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 
00:35:49.584 [2024-11-20 00:00:23.758040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.758080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.758206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.758237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.758406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.758436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.758567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.758597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.758710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.758756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.758902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.758929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.759056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.759089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.759214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.759241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.759363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.759390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.759502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.759541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 
00:35:49.584 [2024-11-20 00:00:23.759639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.759668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.584 qpair failed and we were unable to recover it. 00:35:49.584 [2024-11-20 00:00:23.759788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.584 [2024-11-20 00:00:23.759816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.759910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.759938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.760093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.760121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.760220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.760248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.760392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.760438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.760583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.760616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.760782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.760813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.760929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.760955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.761091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.761118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 
00:35:49.585 [2024-11-20 00:00:23.761242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.761275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.761434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.761460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.761606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.761648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.761762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.761792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.761916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.761945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.762121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.762151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.762309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.762337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.762466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.762495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.762627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.762687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.762836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.762890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 
00:35:49.585 [2024-11-20 00:00:23.763015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.763043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.763167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.763214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.763353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.763382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.763537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.763581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.763698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.763740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.763889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.763916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.764016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.764042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.764173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.764200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.764294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.764321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.764410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.764438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 
00:35:49.585 [2024-11-20 00:00:23.764538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.764567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.764696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.764725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.764924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.764951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.765102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.765129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.765254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.765281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.765393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.765423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.765538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.765580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.765744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.765779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.765884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.765914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 00:35:49.585 [2024-11-20 00:00:23.766043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.585 [2024-11-20 00:00:23.766076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.585 qpair failed and we were unable to recover it. 
00:35:49.586 [2024-11-20 00:00:23.766201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.766227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.766369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.766397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.766534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.766579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.766707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.766736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.766873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.766916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.767040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.767087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.767220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.767249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.767373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.767401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.767544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.767589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.767724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.767768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 
00:35:49.586 [2024-11-20 00:00:23.767892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.767921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.768088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.768146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.768261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.768291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.768405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.768437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.768570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.768601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.768733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.768764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.768894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.768924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.769085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.769142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.769266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.769294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.769399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.769430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 
00:35:49.586 [2024-11-20 00:00:23.769559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.769590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.769777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.769821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.769967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.769993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.770138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.770187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.770302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.770339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.770504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.770532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.770670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.770714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.770832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.770859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.770981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.771009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.771166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.771206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 
00:35:49.586 [2024-11-20 00:00:23.771317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.771347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.771470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.771498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.771668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.771699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.771832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.771863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.771993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.772023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.772140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.772168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.772289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.772316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.586 qpair failed and we were unable to recover it. 00:35:49.586 [2024-11-20 00:00:23.772502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.586 [2024-11-20 00:00:23.772531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.772674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.772704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.772834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.772864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 
00:35:49.587 [2024-11-20 00:00:23.773042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.773108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.773221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.773250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.773375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.773402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.773519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.773548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.773656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.773686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.773799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.773828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.773983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.774013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.774131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.774158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.774282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.774309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.774444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.774474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 
00:35:49.587 [2024-11-20 00:00:23.774599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.774629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.774786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.774812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.774923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.774967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.775095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.775123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.775249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.775277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.775400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.775427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.775542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.775571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.775681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.775711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.775887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.775917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.776054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.776087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 
00:35:49.587 [2024-11-20 00:00:23.776209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.776235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.776327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.776353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.776461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.776491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.776590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.776619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.776749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.776778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.776946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.776975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.777121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.777149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.777269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.777296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.587 [2024-11-20 00:00:23.777396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.587 [2024-11-20 00:00:23.777425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.587 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.777519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.777549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 
00:35:49.588 [2024-11-20 00:00:23.777688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.777721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.777883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.777941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.778064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.778102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.778226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.778254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.778371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.778416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.778554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.778598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.778706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.778737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.778875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.778902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.779059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.779093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.779209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.779237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 
00:35:49.588 [2024-11-20 00:00:23.779331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.779358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.779461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.779489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.779613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.779642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.779755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.779784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.779918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.779946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.780048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.780097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.780236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.780266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.780398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.780427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.780562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.780593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.780716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.780759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 
00:35:49.588 [2024-11-20 00:00:23.780887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.780914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.781031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.781064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.781204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.781232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.781331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.781359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.781516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.781544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.781759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.781788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.781933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.781960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.782091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.782118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.782227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.782257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.782385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.782415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 
00:35:49.588 [2024-11-20 00:00:23.782538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.782567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.782695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.782724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.782854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.782883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.782987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.783018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.783180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.783208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.588 [2024-11-20 00:00:23.783377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.588 [2024-11-20 00:00:23.783407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.588 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.783505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.783536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.783644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.783671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.783846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.783876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.783976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.784006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 
00:35:49.589 [2024-11-20 00:00:23.784148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.784189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.784316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.784345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.784454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.784500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.784641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.784687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.784807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.784835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.784929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.784957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.785117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.785145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.785244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.785270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.785414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.785445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.785547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.785576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 
00:35:49.589 [2024-11-20 00:00:23.785735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.785763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.785873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.785899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.785995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.786023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.786188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.786216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.786322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.786352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.786456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.786485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.786616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.786645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.786743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.786773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.786881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.786912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.787024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.787053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 
00:35:49.589 [2024-11-20 00:00:23.787238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.787267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.787360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.787388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.787558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.787588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.787740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.787784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.787878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.787906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.788002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.788029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.788144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.788175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.788301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.788330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.788435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.788464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.788584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.788613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 
00:35:49.589 [2024-11-20 00:00:23.788738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.788766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.788913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.788939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.789055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.789087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.589 [2024-11-20 00:00:23.789213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.589 [2024-11-20 00:00:23.789239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.589 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.789330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.789355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.789466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.789503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.789672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.789703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.789838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.789867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.789996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.790027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.790184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.790211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 
00:35:49.590 [2024-11-20 00:00:23.790328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.790372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.790503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.790533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.790724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.790753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.790885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.790914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.791010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.791052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.791167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.791194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.791319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.791346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.791546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.791575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.791796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.791826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.791941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.791970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 
00:35:49.590 [2024-11-20 00:00:23.792084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.792129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.792224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.792252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.792364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.792394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.792522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.792551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.792674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.792703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.792806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.792835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.792956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.792985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.793164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.793204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.793328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.793379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.793501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.793546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 
00:35:49.590 [2024-11-20 00:00:23.793714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.793744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.793852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.793880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.793999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.794034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.794143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.794171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.794317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.794344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.794464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.794490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.794615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.794645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.794805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.794834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.794979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.795035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.795202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.795233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 
00:35:49.590 [2024-11-20 00:00:23.795428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.795477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.590 [2024-11-20 00:00:23.795589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.590 [2024-11-20 00:00:23.795639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.590 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.795761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.795789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.795934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.795961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.796056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.796090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.796242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.796272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.796382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.796412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.796548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.796577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.796675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.796704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.796839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.796867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 
00:35:49.591 [2024-11-20 00:00:23.796983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.797012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.797168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.797212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.797355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.797388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.797522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.797554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.797655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.797686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.797798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.797830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.797973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.798003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.798137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.798169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.798333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.798383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.798558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.798590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 
00:35:49.591 [2024-11-20 00:00:23.798725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.798755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.798911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.798942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.799083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.799130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.799257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.799284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.799437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.799471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.799624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.799653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.799772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.799802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.799940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.799971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.800121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.800161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 00:35:49.591 [2024-11-20 00:00:23.800260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.591 [2024-11-20 00:00:23.800289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.591 qpair failed and we were unable to recover it. 
00:35:49.591 [2024-11-20 00:00:23.800397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.591 [2024-11-20 00:00:23.800428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:49.591 qpair failed and we were unable to recover it.
00:35:49.591 [2024-11-20 00:00:23.800735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.591 [2024-11-20 00:00:23.800784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420
00:35:49.591 qpair failed and we were unable to recover it.
00:35:49.592 [2024-11-20 00:00:23.802299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.592 [2024-11-20 00:00:23.802329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420
00:35:49.592 qpair failed and we were unable to recover it.
00:35:49.592 [the same three-line failure repeats back to back from 00:00:23.800 through 00:00:23.832 (log time 00:35:49.591-00:35:49.884), cycling over tqpair handles 0x129cb40, 0x7f6068000b90 and 0x7f6064000b90; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111 and ends with "qpair failed and we were unable to recover it."]
00:35:49.884 [2024-11-20 00:00:23.832448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.832480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.832585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.832615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.832772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.832822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.832918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.832945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.833029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.833057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.833161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.833189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.833277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.833304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.833419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.833445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.833544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.833572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.833692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.833719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 
00:35:49.884 [2024-11-20 00:00:23.833834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.833861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.833982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.834009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.834115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.834143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.834230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.834257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.834427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.834473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.834599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.834630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.834802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.834846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.834972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.834999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.835124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.835152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 00:35:49.884 [2024-11-20 00:00:23.835291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.884 [2024-11-20 00:00:23.835321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.884 qpair failed and we were unable to recover it. 
00:35:49.884 [2024-11-20 00:00:23.835418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.835448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.835564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.835595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.835787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.835833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.835925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.835952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.836075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.836103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.836211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.836257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.836403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.836448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.836616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.836664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.836759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.836787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.836908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.836935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 
00:35:49.885 [2024-11-20 00:00:23.837058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.837096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.837240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.837271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.837416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.837462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.837610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.837642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.837776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.837815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.837921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.837948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.838066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.838102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.838265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.838295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.838420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.838463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.838607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.838637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 
00:35:49.885 [2024-11-20 00:00:23.838766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.838796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.838905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.838935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.839092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.839120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.839289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.839319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.839453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.839482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.839623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.839653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.839813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.839860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.839995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.840035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.840163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.840192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.840313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.840359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 
00:35:49.885 [2024-11-20 00:00:23.840474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.840501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.840628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.840658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.840778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.840807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.840916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.840942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.841083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.841112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.885 qpair failed and we were unable to recover it. 00:35:49.885 [2024-11-20 00:00:23.841239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.885 [2024-11-20 00:00:23.841267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.841446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.841476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.841609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.841638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.841766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.841809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.841939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.841968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 
00:35:49.886 [2024-11-20 00:00:23.842085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.842129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.842220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.842248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.842380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.842419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.842571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.842617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.842758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.842802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.842952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.842980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.843102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.843130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.843271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.843315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.843451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.843481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.843598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.843625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 
00:35:49.886 [2024-11-20 00:00:23.843761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.843805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.843928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.843955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.844052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.844084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.844172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.844198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.844331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.844370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.844533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.844562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.844654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.844683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.844823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.844853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.844974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.845000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.845131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.845160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 
00:35:49.886 [2024-11-20 00:00:23.845259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.845289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.845429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.845465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.845619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.845663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.845753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.845780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.845907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.845935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.846030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.846058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.846161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.846191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.846313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.846340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.846463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.846490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.886 [2024-11-20 00:00:23.846604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.846631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 
00:35:49.886 [2024-11-20 00:00:23.846779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.886 [2024-11-20 00:00:23.846805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.886 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.846953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.846979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.847097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.847123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.847269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.847293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.847426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.847454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.847594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.847620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.847759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.847787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.847884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.847912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.848037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.848066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.848211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.848235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 
00:35:49.887 [2024-11-20 00:00:23.848365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.848392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.848537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.848567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.848738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.848766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.848891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.848919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.849044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.849084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.849221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.849246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.849373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.849401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.849518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.849545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.849698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.849726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.849850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.849878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 
00:35:49.887 [2024-11-20 00:00:23.850033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.850081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.850205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.850231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.850322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.850347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.850496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.850524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.850635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.850661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.850812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.850856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.850969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.851001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.851105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.851149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.851244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.851271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.851367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.851395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 
00:35:49.887 [2024-11-20 00:00:23.851540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.851571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.851738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.851767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.851913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.851957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.887 qpair failed and we were unable to recover it. 00:35:49.887 [2024-11-20 00:00:23.852062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.887 [2024-11-20 00:00:23.852093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.852181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.852207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.852305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.852331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.852443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.852487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.852600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.852626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.852804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.852834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.852963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.852993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 
00:35:49.888 [2024-11-20 00:00:23.853161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.853202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.853359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.853399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.853550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.853583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.853718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.853748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.853890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.853920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.854096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.854139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.854241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.854267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.854412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.854441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.854605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.854633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.854761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.854789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 
00:35:49.888 [2024-11-20 00:00:23.854931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.854964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.855105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.855133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.855293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.855343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.855463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.855490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.855604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.855631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.855754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.855782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.855873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.855901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.856022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.856049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.856159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.856185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.856312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.856341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 
00:35:49.888 [2024-11-20 00:00:23.856443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.856473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.856595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.856624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.856728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.856757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.856857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.856899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.856993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.857036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.857189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.857220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.857355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.857385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.857485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.857514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.857639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.857667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 00:35:49.888 [2024-11-20 00:00:23.857834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.888 [2024-11-20 00:00:23.857863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.888 qpair failed and we were unable to recover it. 
00:35:49.889 [2024-11-20 00:00:23.857994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.858024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.858169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.858195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.858366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.858395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.858528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.858557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.858685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.858714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.858851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.858880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.859011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.859042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.859208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.859248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.859393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.859440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.859595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.859640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 
00:35:49.889 [2024-11-20 00:00:23.859778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.859821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.859975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.860002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.860097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.860124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.860284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.860315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.860442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.860470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.860610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.860639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.860742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.860768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.860917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.860943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.861088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.861115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.861220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.861250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 
00:35:49.889 [2024-11-20 00:00:23.861387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.861416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.861517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.861546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.861700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.861752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.861849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.861876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.861961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.861989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.862085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.862113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.862205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.862232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.862355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.862382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.862504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.862531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 00:35:49.889 [2024-11-20 00:00:23.862653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.889 [2024-11-20 00:00:23.862679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.889 qpair failed and we were unable to recover it. 
00:35:49.890 [2024-11-20 00:00:23.862767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.862794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.862910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.862937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.863054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.863090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.863264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.863293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.863396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.863425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.863580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.863609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.863717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.863747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.863855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.863883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.864007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.864034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.864182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.864227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 
00:35:49.890 [2024-11-20 00:00:23.864362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.864405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.864541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.864586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.864709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.864736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.864854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.864881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.864976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.865003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.865150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.865177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.865324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.865351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.865498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.865525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.865653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.865680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.865803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.865835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 
00:35:49.890 [2024-11-20 00:00:23.865930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.865956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.866113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.866143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.866285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.866328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.866456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.866485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.866620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.866651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.866815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.866844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.866950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.866979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.867126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.867154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.867300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.867329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.867488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.867516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 
00:35:49.890 [2024-11-20 00:00:23.867623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.867652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.867809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.867855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.867976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.868003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.868139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.868186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.868352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.868398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.868531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.868560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.868720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.868750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.890 [2024-11-20 00:00:23.868889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.890 [2024-11-20 00:00:23.868916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.890 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.869063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.869094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.869189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.869216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 
00:35:49.891 [2024-11-20 00:00:23.869318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.869344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.869435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.869461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.869562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.869588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.869748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.869795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.869911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.869938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.870082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.870110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.870243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.870296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.870409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.870454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.870639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.870667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.870784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.870811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 
00:35:49.891 [2024-11-20 00:00:23.870925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.870951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.871054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.871089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.871206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.871232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.871351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.871377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.871475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.871502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.871613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.871639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.871753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.871779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.871877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.871903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.871989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.872018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.872159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.872204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 
00:35:49.891 [2024-11-20 00:00:23.872324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.872356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.872493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.872520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.872656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.872704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.872796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.872823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.872970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.872998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.873147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.873177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.873307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.873336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.873439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.873468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.873621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.873650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.873785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.873814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 
00:35:49.891 [2024-11-20 00:00:23.873954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.873981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.874101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.874127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.874246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.874274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.874415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.874460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.874627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.874671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.891 [2024-11-20 00:00:23.874816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.891 [2024-11-20 00:00:23.874862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.891 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.875011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.875038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.875137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.875165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.875291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.875318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.875440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.875466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 
00:35:49.892 [2024-11-20 00:00:23.875610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.875654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.875784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.875811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.875961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.875988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.876126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.876157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.876316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.876345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.876482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.876511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.876666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.876696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.876801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.876830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.876942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.876968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.877091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.877118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 
00:35:49.892 [2024-11-20 00:00:23.877209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.877236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.877326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.877354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.877471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.877516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.877640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.877671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.877863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.877908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.878023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.878050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.878203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.878247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.878386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.878430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.878592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.878637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.878760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.878787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 
00:35:49.892 [2024-11-20 00:00:23.878914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.878941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.879059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.879099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.879224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.879251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.879376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.879403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.879544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.879570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.879670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.879698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.879826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.879854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.879946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.879974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.880078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.880105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.880224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.880252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 
00:35:49.892 [2024-11-20 00:00:23.880337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.880364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.880495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.880521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.880687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.892 [2024-11-20 00:00:23.880716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.892 qpair failed and we were unable to recover it. 00:35:49.892 [2024-11-20 00:00:23.880871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.880905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.881014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.881044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.881147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.881193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.881321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.881367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.881508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.881552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.881687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.881732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.881851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.881878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 
00:35:49.893 [2024-11-20 00:00:23.882028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.882056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.882197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.882225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.882351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.882377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.882460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.882486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.882638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.882665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.882750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.882793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.882927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.882956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.883100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.883128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.883242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.883268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.883405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.883434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 
00:35:49.893 [2024-11-20 00:00:23.883540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.883583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.883690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.883719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.883823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.883852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.884020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.884046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.884169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.884195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.884289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.884315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.884449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.884478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.884634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.884663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.884796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.884826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.884971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.885011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 
00:35:49.893 [2024-11-20 00:00:23.885142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.885178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.885365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.885410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.885550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.885595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.885728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.885771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.885899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.885927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.886088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.886117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.886235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.886261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.886364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.886390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.886504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.886534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.886665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.886694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 
00:35:49.893 [2024-11-20 00:00:23.886795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.893 [2024-11-20 00:00:23.886825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.893 qpair failed and we were unable to recover it. 00:35:49.893 [2024-11-20 00:00:23.886936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.886963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.887106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.887133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.887249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.887275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.887449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.887479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.887635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.887664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.887790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.887836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.888026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.888053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.888146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.888172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.888265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.888292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 
00:35:49.894 [2024-11-20 00:00:23.888451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.888534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.888629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.888657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.888801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.888846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.888963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.888989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.889137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.889165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.889266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.889294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.889413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.889439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.889541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.889576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.889700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.889727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.889844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.889871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 
00:35:49.894 [2024-11-20 00:00:23.889963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.889989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.890111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.890156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.890311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.890341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.890466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.890494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.890608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.890635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.890745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.890775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.890869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.890898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.891038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.891080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.891234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.891263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.891378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.891404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 
00:35:49.894 [2024-11-20 00:00:23.891546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.891576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.891701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.891731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.891859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.891888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.892019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.892048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.892197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.894 [2024-11-20 00:00:23.892226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.894 qpair failed and we were unable to recover it. 00:35:49.894 [2024-11-20 00:00:23.892362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.892408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.892554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.892598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.892742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.892786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.892899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.892926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.893103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.893135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 
00:35:49.895 [2024-11-20 00:00:23.893291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.893322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.893457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.893486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.893620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.893649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.893812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.893841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.893964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.894012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.894111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.894138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.894271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.894301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.894398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.894427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.894563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.894592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.894703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.894748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 
00:35:49.895 [2024-11-20 00:00:23.894887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.894914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.895064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.895100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.895240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.895285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.895421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.895465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.895573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.895603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.895742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.895769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.895918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.895944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.896111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.896141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.896359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.896389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.896493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.896522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 
00:35:49.895 [2024-11-20 00:00:23.896659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.896689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.896872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.896916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.897084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.897130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.897243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.897273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.897502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.897545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.897685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.897731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.897858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.897885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.898013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.898041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.898225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.898256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.898360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.898388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 
00:35:49.895 [2024-11-20 00:00:23.898490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.898519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.898618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.898664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.895 [2024-11-20 00:00:23.898786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.895 [2024-11-20 00:00:23.898816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.895 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.898971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.898997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.899119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.899146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.899265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.899291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.899399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.899428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.899531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.899560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.899695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.899724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.899853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.899882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 
00:35:49.896 [2024-11-20 00:00:23.900013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.900043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.900171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.900204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.900381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.900428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.900595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.900641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.900812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.900858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.900986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.901013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.901129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.901157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.901267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.901298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.901460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.901489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.901589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.901619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 
00:35:49.896 [2024-11-20 00:00:23.901758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.901784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.901905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.901932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.902048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.902081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.902178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.902204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.902295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.902322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.902445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.902471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.902608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.902658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.902765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.902795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.902934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.902966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.903053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.903091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 
00:35:49.896 [2024-11-20 00:00:23.903260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.903290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.903419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.903449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.903606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.903652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.903780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.903806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.903928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.903955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.904048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.904084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.904189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.904216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.904325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.904355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.904468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.904494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.904654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.904680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 
00:35:49.896 [2024-11-20 00:00:23.904796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.896 [2024-11-20 00:00:23.904823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.896 qpair failed and we were unable to recover it. 00:35:49.896 [2024-11-20 00:00:23.904940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.904966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.905118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.905145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.905232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.905276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.905446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.905475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.905613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.905642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.905801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.905830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.905976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.906004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.906100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.906128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.906267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.906313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 
00:35:49.897 [2024-11-20 00:00:23.906450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.906540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.906687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.906732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.906833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.906860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.906981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.907007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.907144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.907171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.907293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.907320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.907418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.907444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.907565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.907593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.907688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.907714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.907812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.907839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 
00:35:49.897 [2024-11-20 00:00:23.907941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.907968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.908060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.908094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.908213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.908239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.908360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.908387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.908508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.908534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.908621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.908649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.908775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.908802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.908922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.908948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.909077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.909105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.909227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.909254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 
00:35:49.897 [2024-11-20 00:00:23.909397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.909424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.909550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.909578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.909698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.909724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.909819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.909846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.909969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.909996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.910118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.910145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.910235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.910261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.910372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.910402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.910531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.897 [2024-11-20 00:00:23.910561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.897 qpair failed and we were unable to recover it. 00:35:49.897 [2024-11-20 00:00:23.910682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.910713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 
00:35:49.898 [2024-11-20 00:00:23.910819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.910848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.911014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.911042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.911182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.911210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.911321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.911348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.911476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.911510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.911635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.911664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.911752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.911779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.911899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.911926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.912048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.912084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.912197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.912224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 
00:35:49.898 [2024-11-20 00:00:23.912331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.912361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.912498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.912527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.912690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.912719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.912814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.912844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.912973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.913002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.913212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.913246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.913371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.913398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.913515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.913559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.913655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.913685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.913791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.913834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 
00:35:49.898 [2024-11-20 00:00:23.913967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.913997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.914115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.914144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.914230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.914257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.914367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.914397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.914521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.914551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.914698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.914757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.914932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.914979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.915090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.915119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.915230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.915262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.915462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.915511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 
00:35:49.898 [2024-11-20 00:00:23.915609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.915639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.915777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.915805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.915899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.915925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.916027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.916066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.916215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.898 [2024-11-20 00:00:23.916243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.898 qpair failed and we were unable to recover it. 00:35:49.898 [2024-11-20 00:00:23.916358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.916386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.916497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.916529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.916663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.916693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.916834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.916864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.917030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.917057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 
00:35:49.899 [2024-11-20 00:00:23.917169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.917199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.917295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.917322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.917497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.917528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.917657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.917687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.917823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.917853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.918040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.918090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.918194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.918222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.918346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.918372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.918533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.918562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.918692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.918736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 
00:35:49.899 [2024-11-20 00:00:23.918873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.918902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.919032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.919062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.919224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.919250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.919372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.919415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.919550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.919580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.919697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.919739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.919902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.919932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.920058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.920099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.920205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.920232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.920356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.920381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 
00:35:49.899 [2024-11-20 00:00:23.920549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.920578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.920698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.920727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.920836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.920865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.921019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.921048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.921182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.921222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.921326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.921356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.921445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.921473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.921614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.921658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.921801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.921847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 00:35:49.899 [2024-11-20 00:00:23.921945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.921973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.899 qpair failed and we were unable to recover it. 
00:35:49.899 [2024-11-20 00:00:23.922125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.899 [2024-11-20 00:00:23.922157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.922292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.922321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.922446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.922475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.922603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.922632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.922768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.922798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.922928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.922957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.923092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.923120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.923261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.923291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.923446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.923491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.923602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.923646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 
00:35:49.900 [2024-11-20 00:00:23.923764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.923791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.923911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.923938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.924061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.924093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.924199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.924227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.924412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.924455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.924627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.924657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.924816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.924842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.924961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.924988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.925147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.925177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.925313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.925343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 
00:35:49.900 [2024-11-20 00:00:23.925454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.925500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.925661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.925706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.925805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.925832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.925942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.925969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.926103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.926130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.926226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.926252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.926376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.926402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.926524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.926552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.926639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.926667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.926787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.926815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 
00:35:49.900 [2024-11-20 00:00:23.926908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.926934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.927047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.927084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.927280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.927309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.927435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.927465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.927599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.927628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.927766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.927792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.927918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.927944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.928047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.928082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.900 qpair failed and we were unable to recover it. 00:35:49.900 [2024-11-20 00:00:23.928198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.900 [2024-11-20 00:00:23.928227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.928334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.928363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 
00:35:49.901 [2024-11-20 00:00:23.928481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.928508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.928645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.928673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.928801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.928831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.928930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.928959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.929123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.929150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.929242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.929268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.929376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.929405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.929597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.929626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.929743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.929772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.929874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.929917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 
00:35:49.901 [2024-11-20 00:00:23.930038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.930064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.930164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.930190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.930318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.930360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.930527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.930556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.930690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.930719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.930850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.930880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.931106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.931146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.931292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.931339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.931479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.931510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.931668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.931712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 
00:35:49.901 [2024-11-20 00:00:23.931830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.931857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.931984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.932012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.932166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.932197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.932328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.932357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.932455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.932484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.932580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.932609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.932714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.932756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.932909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.932936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.933058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.933094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.933247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.933275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 
00:35:49.901 [2024-11-20 00:00:23.933375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.933402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.933543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.933588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.933683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.933710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.933811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.933839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.933965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.933991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.934105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.934132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.901 [2024-11-20 00:00:23.934237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.901 [2024-11-20 00:00:23.934264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.901 qpair failed and we were unable to recover it. 00:35:49.902 [2024-11-20 00:00:23.934362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.902 [2024-11-20 00:00:23.934388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.902 qpair failed and we were unable to recover it. 00:35:49.902 [2024-11-20 00:00:23.934534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.902 [2024-11-20 00:00:23.934561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.902 qpair failed and we were unable to recover it. 00:35:49.902 [2024-11-20 00:00:23.934685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.902 [2024-11-20 00:00:23.934712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.902 qpair failed and we were unable to recover it. 
00:35:49.902 [2024-11-20 00:00:23.934809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.902 [2024-11-20 00:00:23.934837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420
00:35:49.902 qpair failed and we were unable to recover it.
00:35:49.902 [2024-11-20 00:00:23.934976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.902 [2024-11-20 00:00:23.935016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:49.902 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it.) repeats continuously across log timestamps 00:35:49.902-00:35:49.907 (2024-11-20 00:00:23.934809 through 00:00:23.967653), alternating tqpair=0x129cb40, 0x7f6070000b90, and 0x7f6068000b90, all against addr=10.0.0.2, port=4420 ...]
00:35:49.907 [2024-11-20 00:00:23.967753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.907 [2024-11-20 00:00:23.967781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.907 qpair failed and we were unable to recover it. 00:35:49.907 [2024-11-20 00:00:23.967907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.907 [2024-11-20 00:00:23.967933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.907 qpair failed and we were unable to recover it. 00:35:49.907 [2024-11-20 00:00:23.968057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.907 [2024-11-20 00:00:23.968095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.907 qpair failed and we were unable to recover it. 00:35:49.907 [2024-11-20 00:00:23.968227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.907 [2024-11-20 00:00:23.968253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.907 qpair failed and we were unable to recover it. 00:35:49.907 [2024-11-20 00:00:23.968369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.907 [2024-11-20 00:00:23.968399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.907 qpair failed and we were unable to recover it. 00:35:49.907 [2024-11-20 00:00:23.968501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.907 [2024-11-20 00:00:23.968531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.907 qpair failed and we were unable to recover it. 00:35:49.907 [2024-11-20 00:00:23.968738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.907 [2024-11-20 00:00:23.968767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.907 qpair failed and we were unable to recover it. 00:35:49.907 [2024-11-20 00:00:23.968866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.907 [2024-11-20 00:00:23.968906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.907 qpair failed and we were unable to recover it. 00:35:49.907 [2024-11-20 00:00:23.969057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.907 [2024-11-20 00:00:23.969094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.907 qpair failed and we were unable to recover it. 00:35:49.907 [2024-11-20 00:00:23.969241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.969272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 
00:35:49.908 [2024-11-20 00:00:23.969432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.969461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.969624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.969668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.969807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.969852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.969945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.969971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.970075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.970103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.970228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.970255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.970349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.970376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.970510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.970538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.970658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.970684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.970823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.970863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 
00:35:49.908 [2024-11-20 00:00:23.971000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.971030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.971196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.971224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.971336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.971364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.971495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.971524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.971687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.971717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.971843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.971872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.971998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.972027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.972178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.972205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.972357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.972387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.972509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.972538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 
00:35:49.908 [2024-11-20 00:00:23.972674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.972703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.972859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.972888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.973027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.973056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.973211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.973238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.973376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.973423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.973571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.973617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.973730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.973774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.973923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.973949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.974120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.974152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.974293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.974319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 
00:35:49.908 [2024-11-20 00:00:23.974450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.974477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.974602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.974628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.908 [2024-11-20 00:00:23.974743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.908 [2024-11-20 00:00:23.974769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.908 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.974888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.974914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.975033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.975059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.975181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.975208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.975330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.975376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.975509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.975555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.975693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.975738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.975857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.975883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 
00:35:49.909 [2024-11-20 00:00:23.975982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.976010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.976151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.976191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.976315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.976343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.976472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.976499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.976596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.976624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.976743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.976771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.976887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.976913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.977009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.977036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.977140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.977167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.977268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.977295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 
00:35:49.909 [2024-11-20 00:00:23.977440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.977466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.977601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.977648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.977744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.977771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.977892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.977920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.978034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.978060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.978168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.978195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.978304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.978333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.978427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.978455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.978585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.978614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.978794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.978840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 
00:35:49.909 [2024-11-20 00:00:23.978990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.979018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.979189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.979239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.979378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.979428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.979564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.979609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.979776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.979806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.979916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.979943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.980059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.980092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.980184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.980211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.980356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.980387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.909 [2024-11-20 00:00:23.980543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.980573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 
00:35:49.909 [2024-11-20 00:00:23.980671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.909 [2024-11-20 00:00:23.980701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.909 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.980803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.980831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.980935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.980963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.981067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.981118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.981260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.981290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.981410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.981439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.981568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.981595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.981763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.981795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.981937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.981964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.982084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.982111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 
00:35:49.910 [2024-11-20 00:00:23.982231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.982259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.982378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.982406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.982526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.982552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.982643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.982670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.982794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.982822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.982972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.982999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.983116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.983143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.983240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.983268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.983356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.983383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.983484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.983518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 
00:35:49.910 [2024-11-20 00:00:23.983643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.983673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.983812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.983839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.983936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.983965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.984050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.984087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.984223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.984267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.984437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.984482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.984597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.984641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.984773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.984802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.984951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.984979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.985079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.985106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 
00:35:49.910 [2024-11-20 00:00:23.985228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.985272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.985441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.985470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.985637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.985666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.985808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.985838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.985953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.985982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.986093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.986151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.986301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.986333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.986461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.986491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.910 [2024-11-20 00:00:23.986622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.910 [2024-11-20 00:00:23.986652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.910 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.986755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.986784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 
00:35:49.911 [2024-11-20 00:00:23.986920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.986950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.987057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.987093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.987240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.987266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.987383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.987426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.987601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.987632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.987763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.987793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.987920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.987950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.988114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.988142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.988263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.988289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.988394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.988423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 
00:35:49.911 [2024-11-20 00:00:23.988529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.988555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.988708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.988739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.988866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.988893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.989075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.989103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.989196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.989225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.989349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.989393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.989525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.989554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.989660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.989689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.989845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.989874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.989990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.990016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 
00:35:49.911 [2024-11-20 00:00:23.990120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.990147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.990262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.990289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.990412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.990439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.990571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.990616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.990744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.990773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.990881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.990908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.991062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.991122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.991241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.991267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.991390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.991416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.991583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.991612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 
00:35:49.911 [2024-11-20 00:00:23.991741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.991770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.991901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.991930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.992031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.992081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.992180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.992219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.992318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.992348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.911 [2024-11-20 00:00:23.992477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.911 [2024-11-20 00:00:23.992505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.911 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.992675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.992705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.992861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.992891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.993025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.993054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.993181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.993209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 
00:35:49.912 [2024-11-20 00:00:23.993335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.993361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.993457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.993484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.993658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.993687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.993808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.993849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.993984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.994014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.994157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.994184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.994300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.994326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.994451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.994493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.994599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.994628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.994750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.994779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 
00:35:49.912 [2024-11-20 00:00:23.994950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.994976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.995092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.995119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.995233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.995259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.995345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.995371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.995456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.995482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.995605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.995634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.995796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.995825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.995955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.995984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.996139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.996178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.996283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.996312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 
00:35:49.912 [2024-11-20 00:00:23.996437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.996481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.996588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.996620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.996731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.996761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.996910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.996938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.997087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.997116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.997241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.997268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.997411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.997440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.997598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.912 [2024-11-20 00:00:23.997627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.912 qpair failed and we were unable to recover it. 00:35:49.912 [2024-11-20 00:00:23.997771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.997816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:23.997920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.997949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 
00:35:49.913 [2024-11-20 00:00:23.998082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.998127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:23.998251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.998277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:23.998404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.998433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:23.998533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.998564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:23.998760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.998806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:23.998927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.998953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:23.999078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.999107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:23.999231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.999258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:23.999344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.999372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:23.999490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.999517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 
00:35:49.913 [2024-11-20 00:00:23.999638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.999666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:23.999752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.999778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:23.999940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:23.999980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.000123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.000156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.000294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.000325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.000431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.000461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.000572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.000602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.000729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.000758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.000902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.000930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.001059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.001102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 
00:35:49.913 [2024-11-20 00:00:24.001271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.001316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.001452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.001482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.001635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.001665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.001822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.001861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.001953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.001981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.002081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.002110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.002230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.002257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.002375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.002401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.002526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.002553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.002676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.002703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 
00:35:49.913 [2024-11-20 00:00:24.002799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.002833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.002959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.002985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.003186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.003213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.003366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.003396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.003556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.003585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.003730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.003761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.003913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.003942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.913 [2024-11-20 00:00:24.004044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.913 [2024-11-20 00:00:24.004082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.913 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.004227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.004253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.004368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.004415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 
00:35:49.914 [2024-11-20 00:00:24.004526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.004571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.004710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.004753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.004870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.004897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.005021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.005048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.005191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.005223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.005345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.005374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.005519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.005563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.005671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.005715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.005823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.005854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.006009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.006036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 
00:35:49.914 [2024-11-20 00:00:24.006170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.006197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.006297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.006324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.006501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.006531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.006726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.006756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.006856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.006885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.006989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.007019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.007168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.007195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.007331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.007390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.007525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.007558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.007721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.007751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 
00:35:49.914 [2024-11-20 00:00:24.007906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.007936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.008065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.008119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.008240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.008266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.008374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.008403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.008564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.008593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.008728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.008773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.008929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.008961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.009119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.009147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.009269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.009296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.009430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.009460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 
00:35:49.914 [2024-11-20 00:00:24.009629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.009658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.009802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.009832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.009982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.010013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.010140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.914 [2024-11-20 00:00:24.010169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.914 qpair failed and we were unable to recover it. 00:35:49.914 [2024-11-20 00:00:24.010336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.010386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.010521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.010551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.010709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.010753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.010875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.010902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.011047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.011080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.011202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.011229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 
00:35:49.915 [2024-11-20 00:00:24.011348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.011393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.011517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.011561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.011722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.011752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.011875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.011905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.012043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.012091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.012251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.012280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.012417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.012466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.012610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.012655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.012804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.012849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.012979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.013019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 
00:35:49.915 [2024-11-20 00:00:24.013167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.013196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.013289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.013317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.013456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.013484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.013665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.013694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.013830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.013860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.013965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.013992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.014136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.014163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.014312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.014360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.014468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.014511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.014618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.014647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 
00:35:49.915 [2024-11-20 00:00:24.014787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.014816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.014942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.014981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.015106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.015136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.015285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.015312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.015450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.015494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.015609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.015636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.015796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.015824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.015921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.015950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.016042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.016078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.016202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.016229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 
00:35:49.915 [2024-11-20 00:00:24.016335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.016364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.016501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.016530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.016629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.016658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.016759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.016788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.016947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.016976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.017117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.017145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.017235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.017261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.915 [2024-11-20 00:00:24.017376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.915 [2024-11-20 00:00:24.017402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.915 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.017553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.017582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.017704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.017747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 
00:35:49.916 [2024-11-20 00:00:24.017879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.017921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.018078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.018105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.018228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.018254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.018370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.018397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.018507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.018542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.018736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.018765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.018920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.018950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.019058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.019118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.019235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.019261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.019382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.019424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 
00:35:49.916 [2024-11-20 00:00:24.019565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.019595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.019751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.019780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.019895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.019921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.020021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.020047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.020167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.020207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.020349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.020396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.020494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.020523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.020660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.020690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.020856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.020901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.020995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.021022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 
00:35:49.916 [2024-11-20 00:00:24.021173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.021205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.021340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.021369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.021467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.021496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.021591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.021621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.021749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.021778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.021876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.021904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.022046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.022080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.022198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.022225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.022335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.022365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.022468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.022495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 
00:35:49.916 [2024-11-20 00:00:24.022616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.022643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.022767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.022800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.022922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.022949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.023064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.023106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.023227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.023254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.023398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.023425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.023538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.023565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.023659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.023685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.916 [2024-11-20 00:00:24.023828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.916 [2024-11-20 00:00:24.023874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.916 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.023996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.024022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 
00:35:49.917 [2024-11-20 00:00:24.024179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.024227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.024391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.024421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.024612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.024656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.024808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.024836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.024932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.024960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.025058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.025093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.025181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.025207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.025356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.025383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.025499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.025525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.025616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.025642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 
00:35:49.917 [2024-11-20 00:00:24.025744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.025773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.025927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.025955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.026086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.026129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.026221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.026248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.026388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.026417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.026553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.026583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.026743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.026789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.026943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.026970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.027079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.027113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.027283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.027327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 
00:35:49.917 [2024-11-20 00:00:24.027443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.027487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.027631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.027661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.027854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.027885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.028042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.028080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.028252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.028279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.028381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.028409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.028551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.028594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.028754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.028783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.028887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.028931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.029083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.029110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 
00:35:49.917 [2024-11-20 00:00:24.029201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.029228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.029336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.029365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.029499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.029528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.029668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.029697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.029833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.029862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.030016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.030046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.030203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.030229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.917 qpair failed and we were unable to recover it. 00:35:49.917 [2024-11-20 00:00:24.030365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.917 [2024-11-20 00:00:24.030411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.030579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.030624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.030721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.030748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 
00:35:49.918 [2024-11-20 00:00:24.030895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.030921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.030999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.031026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.031175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.031220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.031308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.031335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.031504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.031552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.031720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.031769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.031890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.031916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.032076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.032103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.032220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.032250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.032411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.032455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 
00:35:49.918 [2024-11-20 00:00:24.032621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.032666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.032760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.032786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.032904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.032932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.033052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.033103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.033241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.033270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.033387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.033413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.033582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.033611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.033771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.033800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.033915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.033941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.034034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.034060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 
00:35:49.918 [2024-11-20 00:00:24.034185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.034212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.034313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.034343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.034480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.034508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.034605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.034634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.034754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.034783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.034903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.034932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.035094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.035137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.035267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.035296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.035418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.035447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.035582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.035612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 
00:35:49.918 [2024-11-20 00:00:24.035769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.035816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.035916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.035943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.036075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.036107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.036204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.036231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.036356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.036383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.036526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.036571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.036731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.036759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.036903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.918 [2024-11-20 00:00:24.036931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.918 qpair failed and we were unable to recover it. 00:35:49.918 [2024-11-20 00:00:24.037034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.037060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.037204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.037233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 
00:35:49.919 [2024-11-20 00:00:24.037329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.037359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.037462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.037491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.037600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.037646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.037803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.037848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.037993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.038020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.038166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.038196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.038311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.038338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.038456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.038485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.038641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.038684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.038802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.038830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 
00:35:49.919 [2024-11-20 00:00:24.038979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.039006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.039137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.039166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.039288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.039315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.039410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.039436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.039527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.039553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.039643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.039669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.039785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.039811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.039930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.039958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.040080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.040109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.040206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.040239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 
00:35:49.919 [2024-11-20 00:00:24.040377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.040421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.040524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.040553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.040688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.040715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.040830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.040857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.041008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.041035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.041178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.041223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.041354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.041400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.041495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.041523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.041613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.041640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.041760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.041788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 
00:35:49.919 [2024-11-20 00:00:24.041885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.041912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.042000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.042028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.042174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.042216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.042310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.042337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.042462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.042488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.042617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.042647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.042737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.042767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.042893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.042923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.043059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.043096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.043238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.043283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 
00:35:49.919 [2024-11-20 00:00:24.043459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.043504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.043673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.919 [2024-11-20 00:00:24.043717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.919 qpair failed and we were unable to recover it. 00:35:49.919 [2024-11-20 00:00:24.043833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.043859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.043982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.044009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.044100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.044129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.044306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.044355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.044498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.044543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.044724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.044773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.044868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.044895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.044992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.045021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 
00:35:49.920 [2024-11-20 00:00:24.045163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.045191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.045326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.045355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.045456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.045487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.045644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.045674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.045782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.045811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.045915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.045944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.046056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.046092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.046236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.046280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.046433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.046478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.046619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.046669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 
00:35:49.920 [2024-11-20 00:00:24.046754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.046781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.046878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.046905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.047018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.047045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.047169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.047196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.047298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.047324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.047413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.047439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.047551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.047578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.047674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.047701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.047815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.047842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.047967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.047993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 
00:35:49.920 [2024-11-20 00:00:24.048113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.048141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.048258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.048285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.048378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.048406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.048498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.048526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.048650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.048678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.048769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.048795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.048899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.048926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.049028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.049054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.049178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.049222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.049331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.049360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 
00:35:49.920 [2024-11-20 00:00:24.049489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.049518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.049648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.049676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.049814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.049843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.049958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.049986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.050125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.050170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.050311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.050355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.050451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.050482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.050599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.050626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.050728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.050755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.920 qpair failed and we were unable to recover it. 00:35:49.920 [2024-11-20 00:00:24.050844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.920 [2024-11-20 00:00:24.050872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 
00:35:49.921 [2024-11-20 00:00:24.050998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.051024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.051157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.051184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.051276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.051302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.051389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.051415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.051524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.051553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.051684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.051713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.051874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.051903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.052065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.052118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.052289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.052318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.052444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.052473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 
00:35:49.921 [2024-11-20 00:00:24.052578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.052607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.052734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.052763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.052887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.052915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.053016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.053045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.053193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.053221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.053392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.053436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.053547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.053577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.053699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.053728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.053842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.053868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.053981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.054008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 
00:35:49.921 [2024-11-20 00:00:24.054136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.054165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.054283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.054309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.054434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.054460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.054576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.054607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.054697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.054739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.054839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.054868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.055005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.055032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.055151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.055177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.055289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.055315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.055535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.055565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 
00:35:49.921 [2024-11-20 00:00:24.055706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.055735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.055836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.055865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.055998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.056038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.056183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.056212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.056380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.056425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.056558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.056604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.056747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.056792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.056930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.056958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.057080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.057126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.057284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.057313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 
00:35:49.921 [2024-11-20 00:00:24.057416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.057445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.057543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.057572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.921 [2024-11-20 00:00:24.057699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.921 [2024-11-20 00:00:24.057728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.921 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.057863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.057892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.058033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.058061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.058161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.058188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.058326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.058355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.058485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.058516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.058651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.058678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.058776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.058802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 
00:35:49.922 [2024-11-20 00:00:24.058948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.058980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.059102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.059130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.059253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.059281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.059399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.059426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.059538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.059565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.059687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.059716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.059864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.059892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.060023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.060049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.060208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.060234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.060326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.060352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 
00:35:49.922 [2024-11-20 00:00:24.060476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.060502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.060629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.060655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.060759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.060788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.060933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.060960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.061089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.061117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.061260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.061290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.061418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.061447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.061573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.061602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.061726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.061755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.061862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.061891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 
00:35:49.922 [2024-11-20 00:00:24.061997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.062028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.062158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.062188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.062362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.062408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.062553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.062601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.062747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.062792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.062908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.062935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.063034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.063062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.063195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.063225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.063356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.063386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.063516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.063546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 
00:35:49.922 [2024-11-20 00:00:24.063649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.063678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.063772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.063801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.063894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.063923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.064095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.064122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.064259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.064303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.064418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.064448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.064602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.064647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.064817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.064865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.065023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.065050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 00:35:49.922 [2024-11-20 00:00:24.065190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.922 [2024-11-20 00:00:24.065234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.922 qpair failed and we were unable to recover it. 
00:35:49.923 [2024-11-20 00:00:24.065412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.065442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.065612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.065666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.065787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.065815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.065932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.065959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.066062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.066097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.066228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.066257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.066389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.066418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.066525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.066554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.066678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.066708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.066827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.066856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 
00:35:49.923 [2024-11-20 00:00:24.067019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.067048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.067221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.067251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.067375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.067404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.067514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.067542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.067725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.067772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.067922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.067949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.068052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.068088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.068187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.068214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.068343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.068388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.068533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.068580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 
00:35:49.923 [2024-11-20 00:00:24.068709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.068740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.068845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.068874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.069038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.069067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.069208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.069237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.069370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.069415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.069578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.069608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.069711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.069740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.069837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.069866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.070005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.070035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 00:35:49.923 [2024-11-20 00:00:24.070167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.923 [2024-11-20 00:00:24.070205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.923 qpair failed and we were unable to recover it. 
00:35:49.923 [2024-11-20 00:00:24.070308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:49.923 [2024-11-20 00:00:24.070338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:49.923 qpair failed and we were unable to recover it.
00:35:49.923 [2024-11-20 00:00:24.071708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420
00:35:49.923 qpair failed and we were unable to recover it.
00:35:49.926 [2024-11-20 00:00:24.087263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:49.926 qpair failed and we were unable to recover it.
(The connect()-failed / sock-connection-error / qpair-failed sequence above repeats continuously from 00:00:24.070308 through 00:00:24.103907 for the three qpairs 0x7f6068000b90, 0x129cb40, and 0x7f6070000b90; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and none of the qpairs recover.)
00:35:49.929 [2024-11-20 00:00:24.104050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.929 [2024-11-20 00:00:24.104088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.929 qpair failed and we were unable to recover it. 00:35:49.929 [2024-11-20 00:00:24.104189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.929 [2024-11-20 00:00:24.104215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.929 qpair failed and we were unable to recover it. 00:35:49.929 [2024-11-20 00:00:24.104294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.929 [2024-11-20 00:00:24.104321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.929 qpair failed and we were unable to recover it. 00:35:49.929 [2024-11-20 00:00:24.104442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.929 [2024-11-20 00:00:24.104469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.929 qpair failed and we were unable to recover it. 00:35:49.929 [2024-11-20 00:00:24.104586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.929 [2024-11-20 00:00:24.104631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.929 qpair failed and we were unable to recover it. 00:35:49.929 [2024-11-20 00:00:24.104788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.929 [2024-11-20 00:00:24.104818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.929 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.104922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.104950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.105078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.105106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.105241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.105285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.105424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.105468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 
00:35:49.930 [2024-11-20 00:00:24.105638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.105669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.105828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.105855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.105976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.106003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.106133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.106160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.106258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.106285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.106404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.106435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.106560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.106587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.106702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.106731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.106889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.106919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.107049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.107087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 
00:35:49.930 [2024-11-20 00:00:24.107252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.107279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.107398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.107427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.107559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.107588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.107720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.107749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.107854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.107880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.107979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.108005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.108138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.108178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.108324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.108356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.108494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.930 [2024-11-20 00:00:24.108523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.930 qpair failed and we were unable to recover it. 00:35:49.930 [2024-11-20 00:00:24.108627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.108656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 
00:35:49.931 [2024-11-20 00:00:24.108840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.108886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.108983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.109011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.109098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.109148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.109242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.109271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.109407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.109437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.109531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.109560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.109726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.109755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.109890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.109920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.110025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.110051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.110189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.110217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 
00:35:49.931 [2024-11-20 00:00:24.110319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.110346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.110498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.110542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.110663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.110693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.110837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.110866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.110997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.111024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.111120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.111148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.111232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.111258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.111382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.111425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.111536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.111565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.111679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.111705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 
00:35:49.931 [2024-11-20 00:00:24.111840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.111868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.111964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.111994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.112152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.112180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.112335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.112375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.112484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.112515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.112616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.112646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.112775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.112809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.112934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.112964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.113078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.113108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.113222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.113248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 
00:35:49.931 [2024-11-20 00:00:24.113350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.931 [2024-11-20 00:00:24.113379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.931 qpair failed and we were unable to recover it. 00:35:49.931 [2024-11-20 00:00:24.113516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.113545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.113667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.113696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.113807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.113857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.113996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.114024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.114168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.114198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.114318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.114359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.114534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.114562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.114690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.114718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.114866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.114893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 
00:35:49.932 [2024-11-20 00:00:24.115024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.115052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.115232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.115262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.115356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.115385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.115548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.115577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.115700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.115759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.115872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.115900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.115992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.116019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.116123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.116150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.116281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.116311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.116415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.116444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 
00:35:49.932 [2024-11-20 00:00:24.116578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.116609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.116747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.116781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.116953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.116981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.117140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.117192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.117337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.117381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.117527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.117570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.117737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.117767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.117928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.117959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.118056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.118093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.118200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.118227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 
00:35:49.932 [2024-11-20 00:00:24.118353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.118380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.118499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.118526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.118646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.118673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.118792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.118819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.118940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.118966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.119060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.932 [2024-11-20 00:00:24.119104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.932 qpair failed and we were unable to recover it. 00:35:49.932 [2024-11-20 00:00:24.119229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.119258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.119365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.119395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.119496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.119526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.119633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.119662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 
00:35:49.933 [2024-11-20 00:00:24.119770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.119799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.119903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.119935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.120109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.120137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.120281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.120327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.120460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.120506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.120646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.120677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.120846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.120872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.120996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.121024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.121142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.121172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.121306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.121335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 
00:35:49.933 [2024-11-20 00:00:24.121519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.121553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.121665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.121693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.121801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.121830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.121963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.121991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.122084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.122113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.122222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.122252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.122420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.122450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.122633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.122678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.122826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.122871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.122962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.122989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 
00:35:49.933 [2024-11-20 00:00:24.123113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.123140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.123264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.123291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.123431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.123461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.123587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.123617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.123755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.123784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.123891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.123917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.124132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.124171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.124302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.124331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.124460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.124487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.124661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.124691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 
00:35:49.933 [2024-11-20 00:00:24.124797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.124826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.124950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.124980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.125123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.125151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.125243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.933 [2024-11-20 00:00:24.125270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.933 qpair failed and we were unable to recover it. 00:35:49.933 [2024-11-20 00:00:24.125376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.125415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.125573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.125620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.125757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.125803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.125923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.125950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.126065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.126098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.126250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.126276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 
00:35:49.934 [2024-11-20 00:00:24.126426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.126454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.126591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.126620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.126751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.126779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.126895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.126921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.127006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.127032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.127168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.127195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.127323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.127358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.127479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.127508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.127657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.127687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.127846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.127875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 
00:35:49.934 [2024-11-20 00:00:24.128037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.128066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.128195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.128240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.128398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.128427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.128524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.128553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.128685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.128714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.128831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.128880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.129045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.129079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.129205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.129233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.129345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.129391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.129527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.129570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 
00:35:49.934 [2024-11-20 00:00:24.129675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.129703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.129819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.129846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.129938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.129964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.130094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.130122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.130248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.130276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.934 [2024-11-20 00:00:24.130396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.934 [2024-11-20 00:00:24.130422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.934 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.130544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.130570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.130691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.130717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.130862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.130888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.131010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.131037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 
00:35:49.935 [2024-11-20 00:00:24.131178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.131207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.131317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.131361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.131506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.131539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.131677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.131707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.131837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.131867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.132000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.132030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.132177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.132205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.132304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.132330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.132507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.132551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.132729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.132759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 
00:35:49.935 [2024-11-20 00:00:24.132917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.132946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.133084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.133129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.133226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.133253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.133336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.133364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.133467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.133513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.133609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.133639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.133739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.133770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.935 [2024-11-20 00:00:24.133874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.935 [2024-11-20 00:00:24.133904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.935 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.134016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.134043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.134176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.134203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 
00:35:49.936 [2024-11-20 00:00:24.134323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.134350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.134516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.134546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.134674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.134702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.134855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.134884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.134999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.135026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.135230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.135257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.135365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.135392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.135560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.135591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.135749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.135778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.135886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.135915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 
00:35:49.936 [2024-11-20 00:00:24.136049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.136082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.136228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.136255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.136436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.136465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.136623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.136653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.136781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.136815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.136920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.136950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.137057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.137091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.137195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.137222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.137308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.137336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.137458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.137485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 
00:35:49.936 [2024-11-20 00:00:24.137570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.137596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.137693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.137739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.137852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.137897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.138038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.138066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.138194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.138221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.138311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.138337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.138451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.138481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.138623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.138654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.138819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.138848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.139003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.139032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 
00:35:49.936 [2024-11-20 00:00:24.139158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.139184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.936 [2024-11-20 00:00:24.139317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.936 [2024-11-20 00:00:24.139356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.936 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.139519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.139548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.139682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.139727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.139859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.139905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.140013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.140052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.140205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.140235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.140360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.140387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.140519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.140548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.140683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.140712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 
00:35:49.937 [2024-11-20 00:00:24.140849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.140877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.141023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.141050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.141180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.141208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.141301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.141328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.141453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.141480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.141568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.141596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.141710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.141739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.141867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.141910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.142060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.142094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.142208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.142234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 
00:35:49.937 [2024-11-20 00:00:24.142357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.142385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.142484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.142530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.142689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.142719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.142847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.142877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.142991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.143026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.143146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.143173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.143277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.143308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.143447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.143477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.143575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.143604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.143770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.143827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 
00:35:49.937 [2024-11-20 00:00:24.143937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.143967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.144094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.144133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.144259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.144304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.144449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.144493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.144597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.144628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.144770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.144801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.144929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.144973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.145059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.145094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.145247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.145274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.937 qpair failed and we were unable to recover it. 00:35:49.937 [2024-11-20 00:00:24.145434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.937 [2024-11-20 00:00:24.145463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 
00:35:49.938 [2024-11-20 00:00:24.145582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.145628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.145749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.145775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.145905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.145936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.146075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.146119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.146212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.146239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.146322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.146365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.146577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.146607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.146767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.146796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.146904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.146934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.147046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.147079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 
00:35:49.938 [2024-11-20 00:00:24.147171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.147198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.147306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.147332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.147478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.147507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.147637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.147667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.147769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.147799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.147910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.147937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.148030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.148056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.148146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.148173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.148262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.148289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.148403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.148430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 
00:35:49.938 [2024-11-20 00:00:24.148567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.148597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.148722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.148751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.148847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.148876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.148991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.149034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.149130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.149172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.149286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.149312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.149500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.149543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.149726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.149754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.149860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.149889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.150020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.150050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 
00:35:49.938 [2024-11-20 00:00:24.150188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.150215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.150337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.150363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.150508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.150551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.150689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.150720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.150845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.150874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.151011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.151042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.151199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.938 [2024-11-20 00:00:24.151238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.938 qpair failed and we were unable to recover it. 00:35:49.938 [2024-11-20 00:00:24.151392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.151437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.151583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.151615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.151751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.151781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 
00:35:49.939 [2024-11-20 00:00:24.151884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.151914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.152046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.152087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.152199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.152226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.152350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.152376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.152494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.152539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.152642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.152671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.152799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.152828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.152952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.152981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.153084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.153129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.153245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.153271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 
00:35:49.939 [2024-11-20 00:00:24.153390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.153416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.153573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.153607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.153717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.153760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.153885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.153914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.154082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.154127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.154249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.154275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.154433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.154459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.154616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.154645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.154802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.154831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.154958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.154987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 
00:35:49.939 [2024-11-20 00:00:24.155106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.155134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.155232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.155258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.155400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.155429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.155560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.155590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.155685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.155715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.155851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.155879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.155980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.156021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.156166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.156193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.156287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.156313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.156405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.156449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 
00:35:49.939 [2024-11-20 00:00:24.156548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.156577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.156715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.156743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.156868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.156896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.157021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.157047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.157149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.939 [2024-11-20 00:00:24.157175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.939 qpair failed and we were unable to recover it. 00:35:49.939 [2024-11-20 00:00:24.157326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.157352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.157536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.157592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.157740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.157788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.157935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.157970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.158119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.158147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 
00:35:49.940 [2024-11-20 00:00:24.158289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.158338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.158477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.158520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.158607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.158634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.158782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.158828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.158913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.158940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.159056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.159094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.159223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.159250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.159363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.159393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.159521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.159551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.159689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.159717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 
00:35:49.940 [2024-11-20 00:00:24.159833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.159863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.160002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.160029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.160220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.160267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.160406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.160452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.160584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.160629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.160724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.160751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.160875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.160902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.160987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.161014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.161133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.161160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.161295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.161322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 
00:35:49.940 [2024-11-20 00:00:24.161443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.161470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.161595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.161621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.161738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.161766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.161866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.161894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.162010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.162036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.162158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.162190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.162330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.162359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.162465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.940 [2024-11-20 00:00:24.162494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.940 qpair failed and we were unable to recover it. 00:35:49.940 [2024-11-20 00:00:24.162620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.162649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.162780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.162811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 
00:35:49.941 [2024-11-20 00:00:24.162947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.162974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.163121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.163152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.163259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.163285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.163453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.163497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.163637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.163666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.163778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.163806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.163949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.163975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.164123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.164150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.164284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.164313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.164477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.164506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 
00:35:49.941 [2024-11-20 00:00:24.164605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.164635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.164821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.164867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.164989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.165016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.165133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.165159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.165296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.165341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.165474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.165519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.165653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.165698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.165788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.165815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.165912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.165940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.166066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.166098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 
00:35:49.941 [2024-11-20 00:00:24.166251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.166278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.166454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.166481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.166583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.166611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.166771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.166798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.166885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.166913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.167046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.167079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.167225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.167255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.167417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.167445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.167557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.167601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.167724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.167750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 
00:35:49.941 [2024-11-20 00:00:24.167888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.167915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.168013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.168039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.168133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.168160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.168285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.168311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.168429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.168456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.168575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.941 [2024-11-20 00:00:24.168606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.941 qpair failed and we were unable to recover it. 00:35:49.941 [2024-11-20 00:00:24.168728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.942 [2024-11-20 00:00:24.168755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.942 qpair failed and we were unable to recover it. 00:35:49.942 [2024-11-20 00:00:24.168889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.942 [2024-11-20 00:00:24.168916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:49.942 qpair failed and we were unable to recover it. 00:35:49.942 [2024-11-20 00:00:24.169040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.942 [2024-11-20 00:00:24.169081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.942 qpair failed and we were unable to recover it. 00:35:49.942 [2024-11-20 00:00:24.169176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.942 [2024-11-20 00:00:24.169203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.942 qpair failed and we were unable to recover it. 
00:35:49.942 [2024-11-20 00:00:24.169323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.942 [2024-11-20 00:00:24.169349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.942 qpair failed and we were unable to recover it. 00:35:49.942 [2024-11-20 00:00:24.169463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.942 [2024-11-20 00:00:24.169492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.942 qpair failed and we were unable to recover it. 00:35:49.942 [2024-11-20 00:00:24.169588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.942 [2024-11-20 00:00:24.169618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.942 qpair failed and we were unable to recover it. 00:35:49.942 [2024-11-20 00:00:24.169736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.942 [2024-11-20 00:00:24.169765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.942 qpair failed and we were unable to recover it. 00:35:49.942 [2024-11-20 00:00:24.169865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.942 [2024-11-20 00:00:24.169894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.942 qpair failed and we were unable to recover it. 00:35:49.942 [2024-11-20 00:00:24.170017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.942 [2024-11-20 00:00:24.170047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.942 qpair failed and we were unable to recover it. 00:35:49.942 [2024-11-20 00:00:24.170175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.942 [2024-11-20 00:00:24.170201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.942 qpair failed and we were unable to recover it. 00:35:49.942 [2024-11-20 00:00:24.170339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:49.942 [2024-11-20 00:00:24.170370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:49.942 qpair failed and we were unable to recover it. 00:35:50.229 [2024-11-20 00:00:24.170540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.229 [2024-11-20 00:00:24.170571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.229 qpair failed and we were unable to recover it. 00:35:50.229 [2024-11-20 00:00:24.170703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.229 [2024-11-20 00:00:24.170733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.229 qpair failed and we were unable to recover it. 
00:35:50.229 [2024-11-20 00:00:24.170850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.229 [2024-11-20 00:00:24.170898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.229 qpair failed and we were unable to recover it. 00:35:50.229 [2024-11-20 00:00:24.171083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.229 [2024-11-20 00:00:24.171112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.229 qpair failed and we were unable to recover it. 00:35:50.229 [2024-11-20 00:00:24.171278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.229 [2024-11-20 00:00:24.171323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.229 qpair failed and we were unable to recover it. 00:35:50.229 [2024-11-20 00:00:24.171459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.229 [2024-11-20 00:00:24.171504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.229 qpair failed and we were unable to recover it. 00:35:50.229 [2024-11-20 00:00:24.171638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.229 [2024-11-20 00:00:24.171683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.229 qpair failed and we were unable to recover it. 00:35:50.229 [2024-11-20 00:00:24.171799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.229 [2024-11-20 00:00:24.171825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.229 qpair failed and we were unable to recover it. 00:35:50.229 [2024-11-20 00:00:24.171907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.229 [2024-11-20 00:00:24.171936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.229 qpair failed and we were unable to recover it. 00:35:50.229 [2024-11-20 00:00:24.172057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.229 [2024-11-20 00:00:24.172092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.229 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.172233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.172259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.172422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.172452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 
00:35:50.230 [2024-11-20 00:00:24.172581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.172611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.172736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.172765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.172893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.172928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.173033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.173063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.173219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.173245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.173396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.173425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.173535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.173564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.173691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.173720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.173826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.173856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.173951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.173979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 
00:35:50.230 [2024-11-20 00:00:24.174082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.174126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.174273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.174300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.174390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.174432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.174593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.174623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.174724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.174753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.175051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.175104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.175225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.175251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.175401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.175428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.175564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.175593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.175700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.175729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 
00:35:50.230 [2024-11-20 00:00:24.175842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.175883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.176000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.176041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.176141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.176168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.176306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.176335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.176495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.176524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.176628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.176657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.176855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.176912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.177038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.177067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.177204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.177232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.177347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.177397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 
00:35:50.230 [2024-11-20 00:00:24.177519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.177547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.177660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.177687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.177780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.177808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.177932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.177958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.178053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.178088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.230 [2024-11-20 00:00:24.178184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.230 [2024-11-20 00:00:24.178210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.230 qpair failed and we were unable to recover it. 00:35:50.231 [2024-11-20 00:00:24.178309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.231 [2024-11-20 00:00:24.178335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.231 qpair failed and we were unable to recover it. 00:35:50.231 [2024-11-20 00:00:24.178447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.231 [2024-11-20 00:00:24.178473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.231 qpair failed and we were unable to recover it. 00:35:50.231 [2024-11-20 00:00:24.178574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.231 [2024-11-20 00:00:24.178601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.231 qpair failed and we were unable to recover it. 00:35:50.231 [2024-11-20 00:00:24.178729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.231 [2024-11-20 00:00:24.178758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.231 qpair failed and we were unable to recover it. 
00:35:50.231 [2024-11-20 00:00:24.178904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.231 [2024-11-20 00:00:24.178931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:50.231 qpair failed and we were unable to recover it.
00:35:50.231-00:35:50.236 [2024-11-20 00:00:24.179024 .. 00:00:24.212337] the same three-line sequence repeats back-to-back for the rest of this interval: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error, followed by "qpair failed and we were unable to recover it." The failing qpairs cycle among tqpair=0x7f6068000b90, tqpair=0x7f6070000b90, and tqpair=0x129cb40, all targeting addr=10.0.0.2, port=4420.
00:35:50.236 [2024-11-20 00:00:24.212489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.236 [2024-11-20 00:00:24.212533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.236 qpair failed and we were unable to recover it. 00:35:50.236 [2024-11-20 00:00:24.212627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.236 [2024-11-20 00:00:24.212655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.236 qpair failed and we were unable to recover it. 00:35:50.236 [2024-11-20 00:00:24.212756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.236 [2024-11-20 00:00:24.212783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.236 qpair failed and we were unable to recover it. 00:35:50.236 [2024-11-20 00:00:24.212877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.236 [2024-11-20 00:00:24.212903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.236 qpair failed and we were unable to recover it. 00:35:50.236 [2024-11-20 00:00:24.212999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.236 [2024-11-20 00:00:24.213026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.236 qpair failed and we were unable to recover it. 00:35:50.236 [2024-11-20 00:00:24.213167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.236 [2024-11-20 00:00:24.213197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.236 qpair failed and we were unable to recover it. 00:35:50.236 [2024-11-20 00:00:24.213358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.236 [2024-11-20 00:00:24.213387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.236 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.213525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.213554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.213692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.213721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.213885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.213914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 
00:35:50.237 [2024-11-20 00:00:24.214052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.214087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.214237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.214263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.214442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.214472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.214614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.214656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.214795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.214824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.214948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.214977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.215084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.215127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.215260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.215286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.215398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.215427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.215555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.215584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 
00:35:50.237 [2024-11-20 00:00:24.215715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.215744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.215912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.215941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.216053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.216102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.216224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.216250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.216415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.216444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.216598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.216627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.216830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.216859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.216975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.217001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.217132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.217159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.217268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.217294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 
00:35:50.237 [2024-11-20 00:00:24.217431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.217461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.217600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.217629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.217765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.217794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.217928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.217958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.218096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.218123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.218221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.218247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.218394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.218424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.218524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.218553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.218658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.218701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.218814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.218840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 
00:35:50.237 [2024-11-20 00:00:24.218965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.218995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.219145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.219172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.219263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.219290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.219400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.219430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.219564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.237 [2024-11-20 00:00:24.219593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.237 qpair failed and we were unable to recover it. 00:35:50.237 [2024-11-20 00:00:24.219749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.219778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.219951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.220008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.220165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.220194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.220284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.220313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.220455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.220506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 
00:35:50.238 [2024-11-20 00:00:24.220616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.220646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.220752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.220780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.220928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.220954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.221076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.221104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.221205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.221231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.221335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.221364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.221502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.221530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.221618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.221645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.221731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.221758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.221854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.221880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 
00:35:50.238 [2024-11-20 00:00:24.222017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.222045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.222198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.222226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.222340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.222368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.222498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.222524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.222641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.222668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.222759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.222785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.222868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.222895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.222981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.223008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.223097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.223125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.223212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.223239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 
00:35:50.238 [2024-11-20 00:00:24.223347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.223376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.223499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.223527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.223688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.223717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.223849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.223878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.224017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.224046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.224143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.224170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.224276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.224307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.224404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.224432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.238 qpair failed and we were unable to recover it. 00:35:50.238 [2024-11-20 00:00:24.224567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.238 [2024-11-20 00:00:24.224613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.224781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.224811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 
00:35:50.239 [2024-11-20 00:00:24.224951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.224978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.225109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.225137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.225252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.225279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.225369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.225396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.225487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.225514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.225609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.225636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.225756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.225783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.225874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.225900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.226020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.226047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.226152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.226179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 
00:35:50.239 [2024-11-20 00:00:24.226268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.226295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.226384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.226411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.226534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.226560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.226673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.226703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.226864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.226890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.226974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.227000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.227088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.227116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.227233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.227259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.227374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.227403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.227530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.227559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 
00:35:50.239 [2024-11-20 00:00:24.227720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.227749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.227882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.227911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.228058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.228093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.228138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aa970 (9): Bad file descriptor 00:35:50.239 [2024-11-20 00:00:24.228360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.228398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.228568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.228613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.228775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.228805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.228909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.228954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.229098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.229141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.229365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.229396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 
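Aside from the repeated connect() failures, the chunk above contains one entry of a different kind: nvme_tcp_qpair_process_completions reports "Failed to flush tqpair=0x12aa970 (9): Bad file descriptor". On a Linux host errno 9 is EBADF, while errno 111, which every other entry reports, is ECONNREFUSED. The snippet below is a minimal, purely illustrative C program (not SPDK code) that maps the two errno values appearing in this log to their symbolic names with strerror():

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The two errno values that appear in the log chunk above. */
    int seen[] = { 111, 9 };

    for (unsigned i = 0; i < sizeof(seen) / sizeof(seen[0]); i++) {
        const char *name = seen[i] == ECONNREFUSED ? "ECONNREFUSED"
                         : seen[i] == EBADF        ? "EBADF"
                         : "other";
        printf("errno %d -> %s (%s)\n", seen[i], name, strerror(seen[i]));
    }
    return 0;
}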
00:35:50.239 [2024-11-20 00:00:24.229596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.229626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.229764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.229796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.229934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.229963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.230096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.230123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.230221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.230248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.230397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.239 [2024-11-20 00:00:24.230424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.239 qpair failed and we were unable to recover it. 00:35:50.239 [2024-11-20 00:00:24.230534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.230563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.230700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.230730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.230827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.230858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.231022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.231049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 
00:35:50.240 [2024-11-20 00:00:24.231170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.231198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.231345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.231372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.231489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.231519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.231678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.231708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.231831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.231861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.231989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.232019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.232146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.232175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.232278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.232305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.232456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.232518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.232697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.232744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 
00:35:50.240 [2024-11-20 00:00:24.232912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.232966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.233086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.233115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.233230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.233260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.233418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.233464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.233580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.233625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.233748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.233775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.233932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.233971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.234094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.234122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.234251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.234278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.234428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.234457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 
00:35:50.240 [2024-11-20 00:00:24.234592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.234622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.234718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.234747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.234887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.234915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.235041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.235092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.235234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.235263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.235380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.235410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.235569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.235598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.235734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.235766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.235860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.235889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.236024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.236051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 
00:35:50.240 [2024-11-20 00:00:24.236160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.236188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.236334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.236361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.236506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.236536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.236665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.240 [2024-11-20 00:00:24.236694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.240 qpair failed and we were unable to recover it. 00:35:50.240 [2024-11-20 00:00:24.236816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.236846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.236977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.237017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.237187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.237217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.237326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.237364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.237554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.237599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.237724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.237752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 
00:35:50.241 [2024-11-20 00:00:24.237883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.237911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.238002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.238030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.238185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.238215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.238311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.238341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.238441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.238470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.238575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.238604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.238733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.238762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.238929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.238956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.239095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.239123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.239224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.239250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 
00:35:50.241 [2024-11-20 00:00:24.239389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.239420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.239553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.239584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.239708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.239737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.239885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.239918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.240047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.240081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.240201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.240229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.240313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.240340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.240544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.240593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.240729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.240772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.240859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.240886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 
00:35:50.241 [2024-11-20 00:00:24.240989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.241016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.241166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.241194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.241287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.241315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.241409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.241435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.241536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.241564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.241684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.241711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.241814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.241841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.241986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.242014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.242120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.242148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.242242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.242270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 
00:35:50.241 [2024-11-20 00:00:24.242390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.242417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.242555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.241 [2024-11-20 00:00:24.242585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.241 qpair failed and we were unable to recover it. 00:35:50.241 [2024-11-20 00:00:24.242710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.242741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.242858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.242884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.242992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.243018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.243117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.243161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.243296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.243325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.243427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.243457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.243592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.243621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.243749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.243778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 
00:35:50.242 [2024-11-20 00:00:24.243902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.243945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.244060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.244096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.244224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.244250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.244355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.244384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.244504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.244533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.244688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.244717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.244850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.244880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.245044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.245081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.245245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.245271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.245411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.245461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 
00:35:50.242 [2024-11-20 00:00:24.245606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.245636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.245801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.245847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.245973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.246000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.246108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.246137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.246235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.246266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.246424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.246455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.246599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.246628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.246723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.246753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.246850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.246876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.246975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.247002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 
00:35:50.242 [2024-11-20 00:00:24.247143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.247170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.247259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.247285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.247381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.247408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.247502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.247529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.247671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.247717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.247849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.247877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.248008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.248048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.248150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.248177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.248292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.248318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 00:35:50.242 [2024-11-20 00:00:24.248458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.242 [2024-11-20 00:00:24.248487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.242 qpair failed and we were unable to recover it. 
00:35:50.242 [2024-11-20 00:00:24.248599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.248629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.248727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.248756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.248877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.248905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.249056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.249122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.249292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.249321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.249457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.249500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.249620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.249649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.249759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.249789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.249968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.250008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.250137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.250165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 
00:35:50.243 [2024-11-20 00:00:24.250296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.250323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.250468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.250497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.250693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.250722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.250856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.250885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.250985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.251014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.251178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.251205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.251307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.251332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.251450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.251477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.251615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.251644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.251801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.251830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 
00:35:50.243 [2024-11-20 00:00:24.251959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.251989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.252127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.252154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.252278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.252304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.252418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.252461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.252557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.252588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.252723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.252753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.252914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.252944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.253099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.253148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.253317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.253350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.253508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.253552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 
00:35:50.243 [2024-11-20 00:00:24.253721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.253750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.253897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.253923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.254030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.254082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.254205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.243 [2024-11-20 00:00:24.254233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.243 qpair failed and we were unable to recover it. 00:35:50.243 [2024-11-20 00:00:24.254340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.254368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.254470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.254498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.254619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.254648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.254770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.254796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.254915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.254943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.255040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.255067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 
00:35:50.244 [2024-11-20 00:00:24.255168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.255195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.255304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.255362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.255561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.255594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.255812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.255843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.255954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.255983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.256096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.256126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.256265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.256294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.256443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.256473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.256608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.256659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.256810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.256836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 
00:35:50.244 [2024-11-20 00:00:24.256929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.256956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.257051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.257085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.257278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.257307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.257417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.257447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.257543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.257572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.257705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.257735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.257881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.257909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.258036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.258084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.258217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.258245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.258365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.258397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 
00:35:50.244 [2024-11-20 00:00:24.258503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.258530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.258707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.258737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.258867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.258898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.259052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.259099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.259207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.259236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.259367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.259413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.259616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.259660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.259812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.259857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.259950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.259978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.260092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.260119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 
00:35:50.244 [2024-11-20 00:00:24.260256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.260284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.260429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.244 [2024-11-20 00:00:24.260459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.244 qpair failed and we were unable to recover it. 00:35:50.244 [2024-11-20 00:00:24.260549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.260578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.260739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.260769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.260876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.260907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.261062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.261137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.261243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.261272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.261413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.261459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.261607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.261652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.261763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.261793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 
00:35:50.245 [2024-11-20 00:00:24.261931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.261958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.262099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.262159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.262306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.262338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.262470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.262501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.262607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.262636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.262772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.262801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.262929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.262959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.263100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.263128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.263280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.263307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.263435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.263463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 
00:35:50.245 [2024-11-20 00:00:24.263598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.263629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.263856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.263899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.264016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.264044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.264209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.264240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.264352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.264382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.264509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.264540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.264670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.264700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.264833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.264863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.265020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.265049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.265161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.265204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 
00:35:50.245 [2024-11-20 00:00:24.265328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.265359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.265461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.265491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.265627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.265658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.265791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.265820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.265977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.266008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.266164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.266190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.266348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.266378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.266509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.266539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.266670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.266698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 00:35:50.245 [2024-11-20 00:00:24.266838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.245 [2024-11-20 00:00:24.266882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.245 qpair failed and we were unable to recover it. 
00:35:50.246 [2024-11-20 00:00:24.267041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.267076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.267208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.267235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.267366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.267409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.267535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.267564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.267767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.267796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.267957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.267993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.268137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.268165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.268263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.268290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.268380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.268406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.268516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.268546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 
00:35:50.246 [2024-11-20 00:00:24.268666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.268693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.268825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.268856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.268991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.269021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.269164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.269191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.269321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.269347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.269467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.269495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.269627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.269657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.269778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.269805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.269967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.269994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.270119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.270146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 
00:35:50.246 [2024-11-20 00:00:24.270274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.270301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.270412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.270442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.270557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.270599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.270758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.270788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.270880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.270910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.271065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.271112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.271226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.271256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.271379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.271406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.271514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.271545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.271657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.271683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 
00:35:50.246 [2024-11-20 00:00:24.271815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.271854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.271950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.271978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.272083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.272112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.272233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.272259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.272347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.272374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.272492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.272518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.272652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.272682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.246 [2024-11-20 00:00:24.272813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.246 [2024-11-20 00:00:24.272842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.246 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.272973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.273003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.273134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.273161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 
00:35:50.247 [2024-11-20 00:00:24.273257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.273284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.273457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.273486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.273615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.273658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.273756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.273786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.273946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.273976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.274132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.274172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.274306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.274335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.274471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.274518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.274684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.274730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.274876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.274920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 
00:35:50.247 [2024-11-20 00:00:24.275064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.275103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.275269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.275297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.275414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.275443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.275582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.275613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.275730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.275756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.275898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.275941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.276075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.276103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.276250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.276280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.276385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.276414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.276564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.276594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 
00:35:50.247 [2024-11-20 00:00:24.276731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.276760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.276920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.276968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.277064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.277099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.277188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.277216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.277360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.277404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.277547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.277593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.277731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.277762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.277917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.277946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.278082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.278130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.278239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.278268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 
00:35:50.247 [2024-11-20 00:00:24.278371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.278399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.278529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.278558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.278706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.278759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.278907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.278938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.279084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.279133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.247 [2024-11-20 00:00:24.279250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.247 [2024-11-20 00:00:24.279277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.247 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.279424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.279453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.279569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.279601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.279734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.279763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.279894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.279925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 
00:35:50.248 [2024-11-20 00:00:24.280062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.280116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.280212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.280238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.280345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.280376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.280483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.280512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.280627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.280659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.280817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.280846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.280953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.280996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.281126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.281153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.281264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.281293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.281463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.281493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 
00:35:50.248 [2024-11-20 00:00:24.281658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.281687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.281803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.281831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.281961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.281990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.282102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.282128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.282214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.282242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.282351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.282381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.282511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.282541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.282657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.282686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.282811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.282841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.282986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.283019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 
00:35:50.248 [2024-11-20 00:00:24.283115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.283142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.283281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.283326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.283459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.283487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.283604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.283631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.283729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.283756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.283875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.283902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.284036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.284063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.284169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.284197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.284284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.248 [2024-11-20 00:00:24.284310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.248 qpair failed and we were unable to recover it. 00:35:50.248 [2024-11-20 00:00:24.284429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.284455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 
00:35:50.249 [2024-11-20 00:00:24.284578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.284608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.284709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.284735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.284868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.284899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.285043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.285076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.285190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.285217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.285354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.285399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.285566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.285596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.285745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.285789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.285923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.285951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.286042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.286077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 
00:35:50.249 [2024-11-20 00:00:24.286170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.286196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.286333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.286363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.286565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.286593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.286765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.286820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.286956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.286985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.287082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.287109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.287229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.287262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.287373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.287418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.287553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.287583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.287723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.287749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 
00:35:50.249 [2024-11-20 00:00:24.287869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.287897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.288046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.288080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.288173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.288199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.288290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.288317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.288464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.288491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.288615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.288642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.288763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.288791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.288941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.288967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.289169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.289215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.289362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.289406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 
00:35:50.249 [2024-11-20 00:00:24.289551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.289596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.289705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.289736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.289851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.289878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.289999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.290025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.290178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.290210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.290339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.290368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.249 qpair failed and we were unable to recover it. 00:35:50.249 [2024-11-20 00:00:24.290504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.249 [2024-11-20 00:00:24.290534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.290629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.290657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.290754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.290783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.290924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.290951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 
00:35:50.250 [2024-11-20 00:00:24.291047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.291099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.291196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.291224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.291340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.291367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.291519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.291553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.291675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.291720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.291858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.291888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.291986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.292014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.292158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.292185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.292275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.292319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.292450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.292479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 
00:35:50.250 [2024-11-20 00:00:24.292598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.292627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.292762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.292792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.292919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.292950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.293116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.293156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.293370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.293415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.293561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.293607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.293723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.293770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.293924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.293952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.294082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.294135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.294255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.294281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 
00:35:50.250 [2024-11-20 00:00:24.294439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.294469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.294571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.294600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.294730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.294761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.294908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.294934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.295025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.295052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.295196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.295235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.295385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.295418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.295535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.295582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.295681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.295712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.295855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.295886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 
00:35:50.250 [2024-11-20 00:00:24.296011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.296050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.296161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.296188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.296308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.296337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.296494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.296523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.250 [2024-11-20 00:00:24.296649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.250 [2024-11-20 00:00:24.296677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.250 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.296781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.296810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.296920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.296948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.297066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.297100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.297207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.297237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.297368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.297398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 
00:35:50.251 [2024-11-20 00:00:24.297532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.297562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.297655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.297685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.297794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.297823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.297941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.297972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.298100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.298129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.298335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.298381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.298548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.298592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.298727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.298773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.298900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.298927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.299048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.299091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 
00:35:50.251 [2024-11-20 00:00:24.299214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.299241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.299359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.299386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.299519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.299556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.299754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.299781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.299979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.300006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.300142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.300187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.300364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.300408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.300560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.300611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.300735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.300761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.300876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.300904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 
00:35:50.251 [2024-11-20 00:00:24.301122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.301158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.301320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.301347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.301482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.301528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.301679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.301710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.301870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.301897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.302023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.302050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.302193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.302220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.302325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.302357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.302475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.302501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.302621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.302648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 
00:35:50.251 [2024-11-20 00:00:24.302743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.302775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.302863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.302890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.251 [2024-11-20 00:00:24.303011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.251 [2024-11-20 00:00:24.303037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.251 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.303143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.303172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.303304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.303349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.303489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.303519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.303674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.303719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.303840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.303867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.304079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.304106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.304245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.304291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 
00:35:50.252 [2024-11-20 00:00:24.304386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.304414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.304543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.304570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.304691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.304719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.304866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.304894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.305017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.305044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.305175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.305204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.305373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.305403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.305508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.305535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.305654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.305681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.305777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.305805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 
00:35:50.252 [2024-11-20 00:00:24.305905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.305932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.306030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.306057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.306149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.306176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.306306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.306333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.306455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.306482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.306602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.306629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.306750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.306778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.306898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.306926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.307045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.307077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.307200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.307242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 
00:35:50.252 [2024-11-20 00:00:24.307396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.307426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.307557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.307588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.307711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.307739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.307864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.307906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.307997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.308023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.308156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.308183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.252 qpair failed and we were unable to recover it. 00:35:50.252 [2024-11-20 00:00:24.308320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.252 [2024-11-20 00:00:24.308349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.308444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.308473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.308575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.308603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.308755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.308783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 
00:35:50.253 [2024-11-20 00:00:24.308914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.308942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.309344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.309391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.309522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.309552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.309718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.309748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.309890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.309917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.310031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.310060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.310188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.310214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.311017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.311052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.311200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.311227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.311321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.311367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 
00:35:50.253 [2024-11-20 00:00:24.311512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.311538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.311685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.311715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.311817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.311846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.311935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.311964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.312120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.312152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.312276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.312304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.312431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.312460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.312596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.312625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.312723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.312752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.312849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.312878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 
00:35:50.253 [2024-11-20 00:00:24.312990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.313016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.313114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.313142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.313238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.313264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.313359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.313385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.313484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.313510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.313615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.313644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.313837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.313866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.313998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.314027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.314159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.314195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.314302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.314340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 
00:35:50.253 [2024-11-20 00:00:24.314448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.314495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.314614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.314657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.314760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.314791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.314923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.314952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.315100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.315139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.315258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.253 [2024-11-20 00:00:24.315284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.253 qpair failed and we were unable to recover it. 00:35:50.253 [2024-11-20 00:00:24.315384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.315411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 00:35:50.254 [2024-11-20 00:00:24.315526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.315556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 00:35:50.254 [2024-11-20 00:00:24.315672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.315718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 00:35:50.254 [2024-11-20 00:00:24.315817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.315846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 
00:35:50.254 [2024-11-20 00:00:24.315954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.315982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 00:35:50.254 [2024-11-20 00:00:24.316114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.316160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 00:35:50.254 [2024-11-20 00:00:24.316261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.316290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 00:35:50.254 [2024-11-20 00:00:24.316425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.316470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 00:35:50.254 [2024-11-20 00:00:24.316605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.316650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 00:35:50.254 [2024-11-20 00:00:24.316788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.316817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 00:35:50.254 [2024-11-20 00:00:24.316926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.316953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 00:35:50.254 [2024-11-20 00:00:24.317051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.317085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 00:35:50.254 [2024-11-20 00:00:24.317215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.317242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 00:35:50.254 [2024-11-20 00:00:24.317364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.254 [2024-11-20 00:00:24.317398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.254 qpair failed and we were unable to recover it. 
[The same three-line failure pattern repeats for every remaining connection attempt logged between 00:00:24.317499 and 00:00:24.348043: posix.c:1054:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x129cb40, 0x7f6068000b90, or 0x7f6070000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." after each attempt.]
00:35:50.259 [2024-11-20 00:00:24.348198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.348225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.348338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.348365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.348453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.348479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.348606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.348632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.348777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.348804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.348895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.348922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.349007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.349037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.349728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.349759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.349884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.349910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.350010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.350037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 
00:35:50.259 [2024-11-20 00:00:24.350140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.350167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.350256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.350282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.350398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.350428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.350596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.350626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.350721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.350751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.350876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.350920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.351006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.351032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.351127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.351154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.351274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.351300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.351425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.351451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 
00:35:50.259 [2024-11-20 00:00:24.351588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.351617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.351734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.351764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.351918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.351946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.352063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.352099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.352197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.352223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.352370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.352396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.352512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.352557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.352662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.352691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.352806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.352848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.352985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.353014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 
00:35:50.259 [2024-11-20 00:00:24.353172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.353199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.353315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.353341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.353449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.259 [2024-11-20 00:00:24.353475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.259 qpair failed and we were unable to recover it. 00:35:50.259 [2024-11-20 00:00:24.353627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.353656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.353798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.353827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.353944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.353999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.354138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.354167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.354268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.354296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.354417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.354445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.354571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.354615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 
00:35:50.260 [2024-11-20 00:00:24.354728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.354754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.354897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.354927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.355049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.355083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.355181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.355208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.355409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.355437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.355589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.355619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.355748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.355776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.355919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.355946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.356044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.356077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.356175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.356201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 
00:35:50.260 [2024-11-20 00:00:24.356292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.356320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.356484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.356511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.356593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.356619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.356762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.356791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.356935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.356962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.357145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.357173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.357266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.357293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.357410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.357439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.357593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.357623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.357713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.357742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 
00:35:50.260 [2024-11-20 00:00:24.357880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.357914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.358048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.358083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.358174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.358201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.358324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.358350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.358471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.358501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.358628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.358656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.358789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.358819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.358951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.358991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.359120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.359150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.359273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.359301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 
00:35:50.260 [2024-11-20 00:00:24.359409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.359455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.359580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.359611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.359734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.359761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.359855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.359882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.360000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.360027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.360123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.360151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.360248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.260 [2024-11-20 00:00:24.360277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.260 qpair failed and we were unable to recover it. 00:35:50.260 [2024-11-20 00:00:24.360388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.360416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.360542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.360569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.360691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.360718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 
00:35:50.261 [2024-11-20 00:00:24.360804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.360833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.360928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.360955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.361106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.361134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.361228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.361266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.361381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.361411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.361538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.361565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.361684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.361712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.361828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.361857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.361957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.361985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.362084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.362113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 
00:35:50.261 [2024-11-20 00:00:24.362208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.362235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.362327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.362353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.362482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.362511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.362635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.362662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.362753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.362781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.362897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.362926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.363045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.363080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.363199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.363225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.363328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.363375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.363541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.363581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 
00:35:50.261 [2024-11-20 00:00:24.363711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.363746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.363898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.363944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.364055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.364095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.364255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.364282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.364400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.364429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.364585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.364614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.364714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.364757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.364918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.364948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.365056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.365097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.365238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.365265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 
00:35:50.261 [2024-11-20 00:00:24.365358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.365384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.365465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.365491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.365637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.365663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.365774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.365800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.366000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.366027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.366133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.366161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.366256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.366285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.366415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.366445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.366550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.261 [2024-11-20 00:00:24.366580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.261 qpair failed and we were unable to recover it. 00:35:50.261 [2024-11-20 00:00:24.366663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.366693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 
00:35:50.262 [2024-11-20 00:00:24.366828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.366857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.366987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.367017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.367159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.367186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.367269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.367295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.367430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.367459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.367611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.367641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.367771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.367801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.367908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.367939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.368081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.368112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.368231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.368258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 
00:35:50.262 [2024-11-20 00:00:24.368375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.368402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.368526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.368555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.368651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.368681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.368781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.368811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.368963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.368993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.369141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.369168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.369268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.369295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.369383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.369410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.369510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.369536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.369661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.369706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 
00:35:50.262 [2024-11-20 00:00:24.369818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.369865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.369968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.369998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.370153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.370181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.370265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.370291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.370388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.370414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.370557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.370583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.370727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.370757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.370886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.370915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.371014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.371044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.371163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.371191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 
00:35:50.262 [2024-11-20 00:00:24.371316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.371342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.371462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.371492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.371683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.371713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.371819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.371849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.371954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.371984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.372102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.372130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.372230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.372257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.372404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.372447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.372596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.372626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.372741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.372770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 
00:35:50.262 [2024-11-20 00:00:24.372913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.372942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.373146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.373173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.373263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.373291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.373410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.373454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.373606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.373632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.373740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.262 [2024-11-20 00:00:24.373770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.262 qpair failed and we were unable to recover it. 00:35:50.262 [2024-11-20 00:00:24.373935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.373964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.374200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.374240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.374389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.374435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.374532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.374559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 
00:35:50.263 [2024-11-20 00:00:24.374656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.374684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.374802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.374830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.374923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.374950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.375063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.375097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.375188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.375216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.375314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.375341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.375456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.375483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.375600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.375627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.375725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.375751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.375895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.375922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 
00:35:50.263 [2024-11-20 00:00:24.376038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.376078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.376179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.376205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.376302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.376329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.376488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.376523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.376622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.376650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.376749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.376777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.376895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.376922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.377010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.377037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.377163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.377193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.377322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.377351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 
00:35:50.263 [2024-11-20 00:00:24.377481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.377512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.377671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.377700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.377802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.377831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.377927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.377957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.378136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.378176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.378316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.378348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.378484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.378516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.378682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.378712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.378863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.378915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.379043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.379079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 
00:35:50.263 [2024-11-20 00:00:24.379189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.379216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.379302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.379329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.379474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.379506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.379607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.379636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.379769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.379801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.379939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.379968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.380100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.380144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.380231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.380262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.380397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.380426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.380545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.380574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 
00:35:50.263 [2024-11-20 00:00:24.380710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.380740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.380873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.380903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.381009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.381038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.263 qpair failed and we were unable to recover it. 00:35:50.263 [2024-11-20 00:00:24.381170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.263 [2024-11-20 00:00:24.381200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.381302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.381337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.381475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.381521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.381686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.381730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.381882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.381908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.382017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.382057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.382211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.382239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 
00:35:50.264 [2024-11-20 00:00:24.382332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.382377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.382530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.382575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.382735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.382765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.382877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.382906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.383021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.383048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.383181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.383207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.383338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.383365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.383466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.383496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.383653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.383683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.383818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.383848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 
00:35:50.264 [2024-11-20 00:00:24.384752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.384794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.384934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.384964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.385085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.385142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.385242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.385269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.385383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.385422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.385557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.385588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.385705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.385732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.385856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.385885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.386038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.386064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.386173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.386200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 
00:35:50.264 [2024-11-20 00:00:24.386289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.386315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.386446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.386475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.386666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.386695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.386833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.386862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.386989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.387033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.387141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.387169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.387262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.387289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.387421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.387450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.387657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.387687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.387829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.387875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 
00:35:50.264 [2024-11-20 00:00:24.388018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.388049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.388209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.388237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.388336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.388362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.388501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.388531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.388703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.388733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.388840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.264 [2024-11-20 00:00:24.388870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.264 qpair failed and we were unable to recover it. 00:35:50.264 [2024-11-20 00:00:24.388977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.389007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.389133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.389160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.389278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.389305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.389457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.389487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 
00:35:50.265 [2024-11-20 00:00:24.389671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.389700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.389801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.389831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.389941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.389976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.390094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.390121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.390246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.390273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.390429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.390472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.390609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.390638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.390783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.390813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.390919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.390949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.391098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.391138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 
00:35:50.265 [2024-11-20 00:00:24.391249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.391277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.391420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.391464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.391576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.391626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.391774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.391801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.391926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.391958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.392063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.392096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.392298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.392325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.392461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.392491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.392598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.392626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.392747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.392774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 
00:35:50.265 [2024-11-20 00:00:24.392908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.392935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.393067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.393100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.393240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.393286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.393398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.393443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.393623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.393653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.393817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.393843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.393934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.393961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.394059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.394095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.394224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.394251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.394376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.394403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 
00:35:50.265 [2024-11-20 00:00:24.394508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.394537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.394638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.394667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.394808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.394837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.394976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.395004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.395111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.395140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.395308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.395353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.395497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.395542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.395710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.395758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.395882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.395909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.396009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.396035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 
00:35:50.265 [2024-11-20 00:00:24.396189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.396234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.396357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.396401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.396598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.396645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.396778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.396805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.397004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.265 [2024-11-20 00:00:24.397031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.265 qpair failed and we were unable to recover it. 00:35:50.265 [2024-11-20 00:00:24.397164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.266 [2024-11-20 00:00:24.397191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.266 qpair failed and we were unable to recover it. 00:35:50.266 [2024-11-20 00:00:24.397317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.266 [2024-11-20 00:00:24.397343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.266 qpair failed and we were unable to recover it. 00:35:50.266 [2024-11-20 00:00:24.397467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.266 [2024-11-20 00:00:24.397496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.266 qpair failed and we were unable to recover it. 00:35:50.266 [2024-11-20 00:00:24.397614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.266 [2024-11-20 00:00:24.397641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.266 qpair failed and we were unable to recover it. 00:35:50.266 [2024-11-20 00:00:24.397740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.266 [2024-11-20 00:00:24.397766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.266 qpair failed and we were unable to recover it. 
00:35:50.267 [2024-11-20 00:00:24.405360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.267 [2024-11-20 00:00:24.405388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:50.267 qpair failed and we were unable to recover it.
00:35:50.267 [2024-11-20 00:00:24.405488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.267 [2024-11-20 00:00:24.405515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:50.267 qpair failed and we were unable to recover it.
00:35:50.267 [2024-11-20 00:00:24.405656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.267 [2024-11-20 00:00:24.405686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:50.267 qpair failed and we were unable to recover it.
00:35:50.267 [2024-11-20 00:00:24.405830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.267 [2024-11-20 00:00:24.405858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:50.267 qpair failed and we were unable to recover it.
00:35:50.267 [2024-11-20 00:00:24.406006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.267 [2024-11-20 00:00:24.406033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:50.267 qpair failed and we were unable to recover it.
00:35:50.267 [2024-11-20 00:00:24.406139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.267 [2024-11-20 00:00:24.406166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:50.267 qpair failed and we were unable to recover it.
00:35:50.267 [2024-11-20 00:00:24.406290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.267 [2024-11-20 00:00:24.406317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:50.267 qpair failed and we were unable to recover it.
00:35:50.267 [2024-11-20 00:00:24.406444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.267 [2024-11-20 00:00:24.406472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:50.267 qpair failed and we were unable to recover it.
00:35:50.267 [2024-11-20 00:00:24.406591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.267 [2024-11-20 00:00:24.406634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:50.267 qpair failed and we were unable to recover it.
00:35:50.267 [2024-11-20 00:00:24.406764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.267 [2024-11-20 00:00:24.406793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:50.267 qpair failed and we were unable to recover it.
00:35:50.270 [2024-11-20 00:00:24.430035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.430063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.430288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.430316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.430440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.430468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.430587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.430615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.430737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.430773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.430876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.430904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.431004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.431033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.431169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.431197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.431291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.431318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.431409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.431436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 
00:35:50.270 [2024-11-20 00:00:24.431596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.431623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.431747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.431775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.431871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.431898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.432022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.432048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.432194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.432221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.432350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.432377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.432469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.432496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.432659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.432686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.432786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.432813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.432940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.432967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 
00:35:50.270 [2024-11-20 00:00:24.433064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.433099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.433244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.433271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.433372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.433409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.433505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.433534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.433648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.433674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.433893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.433920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.434003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.434030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.434143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.434170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.434292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.434319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.434416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.434443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 
00:35:50.270 [2024-11-20 00:00:24.434567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.434593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.434698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.434726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.434836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.434876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.435012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.435049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.435177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.435206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.435291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.435318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.435462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.435492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.435620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.435649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.435800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.435833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.435981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.436011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 
00:35:50.270 [2024-11-20 00:00:24.436110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.436138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.436226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.436252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.436371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.436402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.436539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.436570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.436662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.436697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.436809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.436837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.436954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.436982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.437079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.437118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.437314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.437341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.437463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.437490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 
00:35:50.270 [2024-11-20 00:00:24.437630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.437674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.437775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.270 [2024-11-20 00:00:24.437805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.270 qpair failed and we were unable to recover it. 00:35:50.270 [2024-11-20 00:00:24.437900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.437927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.438051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.438089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.438188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.438215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.438309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.438362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.438497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.438526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.438693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.438741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.438847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.438877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.438981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.439006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 
00:35:50.271 [2024-11-20 00:00:24.439108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.439143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.439235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.439261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.439402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.439444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.439591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.439620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.439723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.439752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.439877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.439906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.440035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.440082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.440206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.440235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.440340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.440368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.440466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.440492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 
00:35:50.271 [2024-11-20 00:00:24.440612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.440653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.440770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.440804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.440937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.440965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.441085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.441112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.441213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.441240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.441340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.441367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.441490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.441516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.441604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.441656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.441755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.441784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.441935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.441963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 
00:35:50.271 [2024-11-20 00:00:24.442061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.442094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.442194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.442221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.442337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.442364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.442492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.442537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.442661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.442691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.442790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.442818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.442911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.442937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.443052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.443089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.443175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.443202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.443290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.443316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 
00:35:50.271 [2024-11-20 00:00:24.443404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.443430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.443580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.443605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.443738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.443767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.443915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.443942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.444074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.444104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.444226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.444253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.444455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.271 [2024-11-20 00:00:24.444499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.271 qpair failed and we were unable to recover it. 00:35:50.271 [2024-11-20 00:00:24.444679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.444726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.444898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.444958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.445075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.445103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 
00:35:50.272 [2024-11-20 00:00:24.445205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.445232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.445317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.445343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.445460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.445487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.445629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.445659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.445867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.445913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.446034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.446061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.446177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.446205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.446301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.446328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.446447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.446475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.446593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.446621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 
00:35:50.272 [2024-11-20 00:00:24.446743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.446771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.446855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.446882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.447012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.447042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.447154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.447184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.447281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.447308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.447430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.447456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.447572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.447598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.447688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.447731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.447824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.447866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.448010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.448042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 
00:35:50.272 [2024-11-20 00:00:24.448172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.448199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.272 [2024-11-20 00:00:24.448285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.272 [2024-11-20 00:00:24.448312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.272 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.448456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.448498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.448613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.448640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.448754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.448801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.448924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.448951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.449041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.449067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.449177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.449204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.449295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.449322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.449438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.449464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 
00:35:50.273 [2024-11-20 00:00:24.449583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.449630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.449761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.449791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.449923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.449952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.450047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.450088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.450216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.450244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.450387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.450416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.450570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.450599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.450727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.450756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.450884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.450913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.451084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.451137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 
00:35:50.273 [2024-11-20 00:00:24.451253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.451280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.451409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.451436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.451528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.451554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.451671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.451700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.451843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.451885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.452059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.452093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.452262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.452289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.452460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.452489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.452615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.452643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.452830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.452860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 
00:35:50.273 [2024-11-20 00:00:24.453034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.273 [2024-11-20 00:00:24.453064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.273 qpair failed and we were unable to recover it. 00:35:50.273 [2024-11-20 00:00:24.453197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.453224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.453357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.453385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.453501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.453527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.453667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.453696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.453851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.453891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.453997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.454026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.454262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.454291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.454414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.454441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.454578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.454621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 
00:35:50.274 [2024-11-20 00:00:24.454768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.454795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.454892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.454920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.455042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.455075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.455202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.455229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.455320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.455348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.455508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.455540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.455710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.455739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.455904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.455950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.456066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.456098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.456217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.456244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 
00:35:50.274 [2024-11-20 00:00:24.456337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.456364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.456477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.456504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.456639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.456678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.456809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.456837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.456938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.456964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.457112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.274 [2024-11-20 00:00:24.457140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.274 qpair failed and we were unable to recover it. 00:35:50.274 [2024-11-20 00:00:24.457255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.457282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.457367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.457393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.457483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.457509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.457629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.457657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 
00:35:50.275 [2024-11-20 00:00:24.457816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.457845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.457949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.457975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.458092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.458120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.458215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.458243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.458362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.458389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.458511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.458537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.458627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.458653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.458767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.458793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.458917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.458943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.459064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.459097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 
00:35:50.275 [2024-11-20 00:00:24.459213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.459239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.459345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.459371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.459464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.459495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.459612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.459638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.459788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.459814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.459901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.459928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.460079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.460118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.460218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.460243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.460335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.460361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.460447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.460473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 
00:35:50.275 [2024-11-20 00:00:24.460588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.460614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.460740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.460766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.460884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.460911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.461001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.461027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.461185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.461212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.461299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.461326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.461426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.461452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.461594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.461621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.461716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.461741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.461836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.461862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 
00:35:50.275 [2024-11-20 00:00:24.461973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.462000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.275 qpair failed and we were unable to recover it. 00:35:50.275 [2024-11-20 00:00:24.462113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.275 [2024-11-20 00:00:24.462150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.462258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.462285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.462388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.462414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.462503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.462529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.462648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.462674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.462767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.462794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.462883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.462909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.463024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.463050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.463157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.463188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 
00:35:50.276 [2024-11-20 00:00:24.463296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.463323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.463417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.463443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.463552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.463578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.463690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.463716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.463805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.463831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.463922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.463949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.464035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.464061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.464182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.464208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.464314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.464349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.464425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.464452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 
00:35:50.276 [2024-11-20 00:00:24.464535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.464562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.464647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.464673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.464759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.464785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.464913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.464939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.465040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.465066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.465167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.465193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.465282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.465309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.465440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.465466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.465599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.465626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.465714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.465748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 
00:35:50.276 [2024-11-20 00:00:24.465868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.465894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.276 [2024-11-20 00:00:24.465980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.276 [2024-11-20 00:00:24.466006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.276 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.466108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.466136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.466231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.466257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.466377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.466404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.466524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.466551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.466698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.466729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.466813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.466839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.466933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.466959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.467076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.467103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 
00:35:50.277 [2024-11-20 00:00:24.467200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.467226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.467326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.467375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.467543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.467578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.467702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.467731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.467855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.467882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.468005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.468032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.468142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.468169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.468292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.468320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.468412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.468439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.468561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.468588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 
00:35:50.277 [2024-11-20 00:00:24.468683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.468725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.468828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.468857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.468961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.468990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.469138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.469165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.469280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.469306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.469397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.469424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.469510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.469536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.469616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.469660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.469754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.469785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.469887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.469916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 
00:35:50.277 [2024-11-20 00:00:24.470002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.470031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.470160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.470188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.277 [2024-11-20 00:00:24.470267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.277 [2024-11-20 00:00:24.470293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.277 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.470434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.470462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.470548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.470576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.470688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.470718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.470810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.470840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.470958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.470984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.471064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.471098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.471192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.471218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 
00:35:50.278 [2024-11-20 00:00:24.471309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.471350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.471499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.471527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.471615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.471643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.471771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.471800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.471925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.471954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.472079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.472133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.472254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.472281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.472383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.472409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.472497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.472523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.472683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.472726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 
00:35:50.278 [2024-11-20 00:00:24.472827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.472856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.473051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.473083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.473179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.473218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.473352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.473391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.473604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.473650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.473804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.473848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.473996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.474022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.474137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.474165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.474262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.474289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 00:35:50.278 [2024-11-20 00:00:24.474410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.278 [2024-11-20 00:00:24.474456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.278 qpair failed and we were unable to recover it. 
00:35:50.278 - 00:35:50.285 [2024-11-20 00:00:24.474601 - 00:00:24.510003] The same three-line error sequence repeats for every connection attempt logged in this interval:
00:35:50.xxx [2024-11-20 00:00:24.xxxxxx] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.xxx [2024-11-20 00:00:24.xxxxxx] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=<handle> with addr=10.0.0.2, port=4420
00:35:50.xxx qpair failed and we were unable to recover it.
The tqpair handles cycling through these attempts are 0x129cb40, 0x7f6064000b90, 0x7f6068000b90, and 0x7f6070000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 (ECONNREFUSED).
00:35:50.285 [2024-11-20 00:00:24.510115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.285 [2024-11-20 00:00:24.510143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.285 qpair failed and we were unable to recover it. 00:35:50.285 [2024-11-20 00:00:24.510246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.285 [2024-11-20 00:00:24.510272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.285 qpair failed and we were unable to recover it. 00:35:50.286 [2024-11-20 00:00:24.510428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.286 [2024-11-20 00:00:24.510459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.286 qpair failed and we were unable to recover it. 00:35:50.286 [2024-11-20 00:00:24.510548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.286 [2024-11-20 00:00:24.510591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.286 qpair failed and we were unable to recover it. 00:35:50.286 [2024-11-20 00:00:24.510757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.286 [2024-11-20 00:00:24.510813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.286 qpair failed and we were unable to recover it. 00:35:50.286 [2024-11-20 00:00:24.510949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.286 [2024-11-20 00:00:24.510976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.286 qpair failed and we were unable to recover it. 00:35:50.286 [2024-11-20 00:00:24.511136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.286 [2024-11-20 00:00:24.511163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.286 qpair failed and we were unable to recover it. 00:35:50.286 [2024-11-20 00:00:24.511286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.286 [2024-11-20 00:00:24.511313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.286 qpair failed and we were unable to recover it. 00:35:50.286 [2024-11-20 00:00:24.511419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.286 [2024-11-20 00:00:24.511445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.286 qpair failed and we were unable to recover it. 00:35:50.286 [2024-11-20 00:00:24.511589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.286 [2024-11-20 00:00:24.511618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.286 qpair failed and we were unable to recover it. 
00:35:50.286 [2024-11-20 00:00:24.511748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.286 [2024-11-20 00:00:24.511780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.286 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.511916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.511942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.512042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.512074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.512167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.512193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.512330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.512359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.512461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.512490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.512621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.512650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.512780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.512809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.512960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.512987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.513080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.513107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 
00:35:50.572 [2024-11-20 00:00:24.513196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.513224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.513384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.513423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.513549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.513581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.513760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.513815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.513935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.513964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.514058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.514093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.514187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.514214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.514298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.514325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.514471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.514497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.514582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.514610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 
00:35:50.572 [2024-11-20 00:00:24.514733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.514761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.514867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.514906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.515030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.515059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.515162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.515189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.515300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.572 [2024-11-20 00:00:24.515326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.572 qpair failed and we were unable to recover it. 00:35:50.572 [2024-11-20 00:00:24.515423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.515450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.515583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.515613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.515770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.515799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.515929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.515958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.516109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.516138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 
00:35:50.573 [2024-11-20 00:00:24.516316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.516361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.516474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.516523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.516642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.516670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.516817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.516844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.516976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.517021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.517140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.517168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.517269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.517296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.517393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.517418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.517505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.517533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.517635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.517664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 
00:35:50.573 [2024-11-20 00:00:24.517784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.517812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.517944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.517983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.518113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.518142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.518233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.518261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.518399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.518429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.518536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.518566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.518672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.518702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.518860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.518907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.519035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.519063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.519159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.519185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 
00:35:50.573 [2024-11-20 00:00:24.519297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.519342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.519478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.519522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.519693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.519738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.519830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.519857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.519986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.520025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.573 [2024-11-20 00:00:24.520190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.573 [2024-11-20 00:00:24.520219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.573 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.520340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.520368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.520525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.520552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.520640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.520666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.520763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.520808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 
00:35:50.574 [2024-11-20 00:00:24.520942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.520968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.521078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.521123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.521269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.521307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.521474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.521520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.521663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.521691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.521814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.521840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.521924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.521951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.522065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.522099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.522187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.522214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.522357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.522386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 
00:35:50.574 [2024-11-20 00:00:24.522515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.522545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.522716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.522743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.522843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.522869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.522956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.522982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.523078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.523106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.523238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.523277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.523421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.523455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.523591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.523635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.523728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.523755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.523866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.523893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 
00:35:50.574 [2024-11-20 00:00:24.523976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.524002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.524124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.524152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.524245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.524290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.524454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.524485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.524608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.524637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.574 [2024-11-20 00:00:24.524764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.574 [2024-11-20 00:00:24.524793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.574 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.524932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.524981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.525082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.525109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.525208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.525239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.525365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.525394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 
00:35:50.575 [2024-11-20 00:00:24.525487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.525514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.525660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.525688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.525780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.525807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.525926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.525954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.526049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.526088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.526237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.526263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.526349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.526375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.526462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.526488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.526583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.526610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.526700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.526729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 
00:35:50.575 [2024-11-20 00:00:24.526816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.526842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.526993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.527021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.527172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.527202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.527295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.527339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.527427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.527453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.527586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.527616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.527770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.527799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.527958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.527986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.528187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.528215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.528327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.528374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 
00:35:50.575 [2024-11-20 00:00:24.528494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.528521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.528652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.528678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.528767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.528793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.528900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.528932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.529065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.575 [2024-11-20 00:00:24.529120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.575 qpair failed and we were unable to recover it. 00:35:50.575 [2024-11-20 00:00:24.529234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.529264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.529381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.529430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.529579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.529627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.529824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.529878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.529990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.530018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 
00:35:50.576 [2024-11-20 00:00:24.530116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.530143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.530236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.530263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.530351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.530396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.530525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.530555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.530722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.530750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.530844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.530872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.530960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.530987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.531114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.531145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.531280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.531314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.531489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.531538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 
00:35:50.576 [2024-11-20 00:00:24.531687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.531735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.531875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.531901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.531997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.532024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.532162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.532189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.532288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.532316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.532449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.532480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.532628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.532686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.532843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.532870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.532984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.533024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.533138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.533167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 
00:35:50.576 [2024-11-20 00:00:24.533315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.533343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.533445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.533474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.533665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.533714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.533828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.533864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.534001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.534028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.534159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.576 [2024-11-20 00:00:24.534188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.576 qpair failed and we were unable to recover it. 00:35:50.576 [2024-11-20 00:00:24.534392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.534422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.534512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.534540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.534711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.534758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.534853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.534879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 
00:35:50.577 [2024-11-20 00:00:24.534998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.535025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.535154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.535185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.535291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.535320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.535425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.535453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.535649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.535684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.535865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.535928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.536036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.536065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.536231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.536278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.536427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.536474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.536688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.536743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 
00:35:50.577 [2024-11-20 00:00:24.536839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.536866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.536953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.536983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.537109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.537137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.537235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.537262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.537389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.537415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.537564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.537591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.537685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.537711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.537802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.537829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.537949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.537976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 00:35:50.577 [2024-11-20 00:00:24.538115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.538143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.577 qpair failed and we were unable to recover it. 
00:35:50.577 [2024-11-20 00:00:24.538266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.577 [2024-11-20 00:00:24.538293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.538392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.538418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.538506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.538533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.538653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.538679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.538774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.538800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.538914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.538940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.539042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.539088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.539257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.539301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.539483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.539529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.539677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.539721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 
00:35:50.578 [2024-11-20 00:00:24.539835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.539861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.539958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.539984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.540105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.540133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.540228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.540254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.540338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.540365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.540458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.540485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.540574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.540600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.540715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.540741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.540840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.540868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.541000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.541039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 
00:35:50.578 [2024-11-20 00:00:24.541173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.541207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.541369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.541401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.541563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.541608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.541747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.541791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.541917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.541945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.542029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.542059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.542163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.542207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.542337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.542366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.578 [2024-11-20 00:00:24.542488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.578 [2024-11-20 00:00:24.542537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.578 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.542719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.542767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 
00:35:50.579 [2024-11-20 00:00:24.542889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.542919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.543038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.543101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.543221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.543254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.543400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.543427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.543579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.543606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.543696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.543739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.543893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.543931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.544073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.544119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.544205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.544232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.544342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.544388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 
00:35:50.579 [2024-11-20 00:00:24.544547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.544574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.544689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.544716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.544866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.544895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.545032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.545061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.545216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.545242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.545395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.545431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.545605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.545659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.545777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.545820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.545943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.545975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.546085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.546131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 
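The triples repeated above (posix connect() error, nvme_tcp qpair connect error, "qpair failed and we were unable to recover it.") are the host-side NVMe/TCP initiator retrying 10.0.0.2:4420 across a handful of qpairs while the target is unreachable. When skimming output this dense it can help to tally failures per qpair pointer; the one-liner below is illustrative only and assumes the console has been saved to a file named console.log:
  grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c | sort -rn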
00:35:50.579 [2024-11-20 00:00:24.546256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.546283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.546378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.546404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.546500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.546532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.546671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.546701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.546864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.546894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.547023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.547053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 342394 Killed "${NVMF_APP[@]}" "$@" 00:35:50.579 [2024-11-20 00:00:24.547174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.547201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 [2024-11-20 00:00:24.547372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.547401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 00:35:50.579 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:35:50.579 [2024-11-20 00:00:24.547502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.579 [2024-11-20 00:00:24.547531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.579 qpair failed and we were unable to recover it. 
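At this point target_disconnect.sh has killed the running target application (the "Killed ${NVMF_APP[@]}" message above) and disconnect_init 10.0.0.2 starts bringing a fresh one up, so every connect() from the initiator is actively refused until a new listener exists on 10.0.0.2:4420; on Linux, errno 111 is ECONNREFUSED. The probe below is an illustration only, not part of the test script, of the same condition checked by hand:
  # exits 0 once something accepts TCP connections on 10.0.0.2:4420 again,
  # non-zero while the port is still refused (the initiator's errno 111)
  timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' 2>/dev/null && echo listening || echo 'refused (ECONNREFUSED)'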
00:35:50.579 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:50.579 [2024-11-20 00:00:24.547649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.547678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:50.580 [2024-11-20 00:00:24.547847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.547909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:50.580 [2024-11-20 00:00:24.548038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.548067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.580 [2024-11-20 00:00:24.548177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.548205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.548321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.548356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.548471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.548498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.548622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.548654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.548793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.548823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 
00:35:50.580 [2024-11-20 00:00:24.548925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.548967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.549126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.549155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.549264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.549294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.549456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.549486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.549604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.549631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.549746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.549775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.549899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.549943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.550073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.550101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.550221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.550250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.550364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.550408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 
00:35:50.580 [2024-11-20 00:00:24.550556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.550600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.550706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.550735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.550881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.550923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.551031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.551088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.551212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.551240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.551341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.551376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.551516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.551546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.551649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.551679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.551789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.551834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.551957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.551987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 
00:35:50.580 [2024-11-20 00:00:24.552107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.580 [2024-11-20 00:00:24.552146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.580 qpair failed and we were unable to recover it. 00:35:50.580 [2024-11-20 00:00:24.552239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.552267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=342951 00:35:50.581 [2024-11-20 00:00:24.552415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:50.581 [2024-11-20 00:00:24.552445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 342951 00:35:50.581 [2024-11-20 00:00:24.552586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.552630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 342951 ']' 00:35:50.581 [2024-11-20 00:00:24.552789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.552838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:50.581 [2024-11-20 00:00:24.552951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.552977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:50.581 [2024-11-20 00:00:24.553076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.553104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 
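The xtrace lines interleaved above show nvmfappstart relaunching the target inside the cvl_0_0_ns_spdk network namespace: nvmf_tgt is started with -i 0 (shared-memory instance id), -e 0xFFFF (tracepoint group mask) and -m 0xF0 (core mask, i.e. cores 4-7), its pid is recorded as nvmfpid=342951, and that pid is handed to waitforlisten. A minimal sketch of the launch-and-record step, assuming a generic build path rather than the Jenkins workspace path:
  APP=./build/bin/nvmf_tgt                                   # assumed path to the SPDK target binary
  ip netns exec cvl_0_0_ns_spdk "$APP" -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!                                                 # pid later passed to the wait helper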
00:35:50.581 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:50.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:50.581 [2024-11-20 00:00:24.553249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:50.581 [2024-11-20 00:00:24.553277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.581 [2024-11-20 00:00:24.553422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.553470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.553620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.553668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.553804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.553830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.553958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.553984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.554084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.554111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.554198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.554224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.554319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.554347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 
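waitforlisten 342951 then blocks (with rpc_addr=/var/tmp/spdk.sock and max_retries=100 from the trace above) until the relaunched process is up and listening on its RPC socket. The helper below is only a sketch of that pattern under those assumptions, not SPDK's actual autotest_common.sh implementation:
  wait_for_rpc() {                                  # illustrative stand-in for waitforlisten
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} tries=${3:-100}
      while (( tries-- )); do
          kill -0 "$pid" 2>/dev/null || return 1    # target died before it could listen
          [ -S "$sock" ] && return 0                # RPC socket exists: app is accepting RPCs
          sleep 0.1
      done
      return 1                                      # timed out waiting
  }
  wait_for_rpc "$nvmfpid" /var/tmp/spdk.sock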
00:35:50.581 [2024-11-20 00:00:24.554478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.554533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.554664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.554710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.554867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.554897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.555002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.555029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.555159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.555187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.555283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.555310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.555400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.555427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.555531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.555557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.555687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.555715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.555810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.555849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 
00:35:50.581 [2024-11-20 00:00:24.556000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.556040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.556157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.556185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.556312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.581 [2024-11-20 00:00:24.556338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.581 qpair failed and we were unable to recover it. 00:35:50.581 [2024-11-20 00:00:24.556443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.556470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.556601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.556629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.556777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.556804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.556907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.556936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.557031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.557058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.557166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.557194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.557310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.557349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 
00:35:50.582 [2024-11-20 00:00:24.557454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.557482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.557575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.557602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.557701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.557729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.557853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.557886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.558012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.558051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.558173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.558201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.558350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.558390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.558537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.558565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.558666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.558692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.558812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.558856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 
00:35:50.582 [2024-11-20 00:00:24.559002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.559034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.559152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.559179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.559271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.559297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.559435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.559482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.559627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.559659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.559844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.559871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.560020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.560049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.560191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.560220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.560321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.560348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.560500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.560528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 
00:35:50.582 [2024-11-20 00:00:24.560670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.560722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.560845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.560874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.561000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.561027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.582 qpair failed and we were unable to recover it. 00:35:50.582 [2024-11-20 00:00:24.561127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.582 [2024-11-20 00:00:24.561155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.561268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.561296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.561400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.561426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.561541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.561567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.561691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.561720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.561813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.561840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.562004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.562031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 
00:35:50.583 [2024-11-20 00:00:24.562131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.562160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.562246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.562272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.562361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.562397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.562525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.562552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.562647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.562673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.562768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.562794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.562907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.562933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.563048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.563081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.563202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.563229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.563325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.563352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 
00:35:50.583 [2024-11-20 00:00:24.563477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.563504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.563635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.563674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.563779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.563808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.563930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.563957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.564129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.564160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.564261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.564292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.564393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.564423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.583 [2024-11-20 00:00:24.564597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.583 [2024-11-20 00:00:24.564627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.583 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.564756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.564785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.564921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.564951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 
00:35:50.584 [2024-11-20 00:00:24.565074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.565122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.565290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.565320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.565433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.565476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.565667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.565719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.565815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.565843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.565939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.565976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.566080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.566108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.566280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.566326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.566460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.566506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.566647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.566692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 
00:35:50.584 [2024-11-20 00:00:24.566799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.566840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.567005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.567045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.567186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.567216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.567342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.567374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.567514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.567544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.567663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.567706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.567882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.567928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.568029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.568057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.568172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.568200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.568337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.568382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 
00:35:50.584 [2024-11-20 00:00:24.568545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.568595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.568734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.568765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.568898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.568925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.569043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.569076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.569178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.569208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.569342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.569373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.569552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.569606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.569745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.569776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.569881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.569911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 00:35:50.584 [2024-11-20 00:00:24.570053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.584 [2024-11-20 00:00:24.570094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.584 qpair failed and we were unable to recover it. 
00:35:50.584 [2024-11-20 00:00:24.570179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.570206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.570318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.570346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.570449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.570477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.570619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.570661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.570800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.570831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.570968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.570997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.571088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.571115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.571204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.571231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.571366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.571416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.571599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.571647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 
00:35:50.585 [2024-11-20 00:00:24.571781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.571809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.571933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.571961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.572111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.572139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.572242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.572270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.572377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.572405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.572501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.572543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.572671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.572699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.572809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.572835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.572930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.572956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.573053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.573091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 
00:35:50.585 [2024-11-20 00:00:24.573235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.573261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.573350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.573384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.573483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.573509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.573640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.573666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.573761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.573790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.573903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.573943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.574080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.574109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.574202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.574229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.574325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.574352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.574453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.574480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 
00:35:50.585 [2024-11-20 00:00:24.574572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.574600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.585 qpair failed and we were unable to recover it. 00:35:50.585 [2024-11-20 00:00:24.574738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.585 [2024-11-20 00:00:24.574766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.574929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.574969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.575098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.575127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.575225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.575252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.575354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.575381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.575507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.575534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.575630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.575658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.575756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.575784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.575912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.575940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 
00:35:50.586 [2024-11-20 00:00:24.576031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.576058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.576157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.576184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.576290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.576316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.576494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.576520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.576642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.576669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.576803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.576831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.576932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.576961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.577066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.577114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.577207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.577235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.577335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.577364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 
00:35:50.586 [2024-11-20 00:00:24.577491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.577517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.577609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.577635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.577735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.577763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.577874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.577914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.578040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.578074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.578179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.578205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.578325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.578351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.578438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.578469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.578585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.578617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.578764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.578790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 
00:35:50.586 [2024-11-20 00:00:24.578876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.586 [2024-11-20 00:00:24.578902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.586 qpair failed and we were unable to recover it. 00:35:50.586 [2024-11-20 00:00:24.578995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.579021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.579171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.579210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.579320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.579349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.579444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.579471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.579603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.579631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.579755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.579786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.579908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.579936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.580059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.580095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.580221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.580246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 
00:35:50.587 [2024-11-20 00:00:24.580339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.580365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.580493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.580519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.580640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.580666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.580751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.580776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.580876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.580917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.581043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.581077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.581201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.581227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.581322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.581348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.581476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.581503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 00:35:50.587 [2024-11-20 00:00:24.581599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.587 [2024-11-20 00:00:24.581637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.587 qpair failed and we were unable to recover it. 
00:35:50.587 [2024-11-20 00:00:24.581763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.587 [2024-11-20 00:00:24.581789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:50.587 qpair failed and we were unable to recover it.
00:35:50.587-00:35:50.591: the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triple repeats continuously through 00:00:24.581-00:00:24.600, cycling over tqpair handles 0x7f6068000b90, 0x7f6064000b90, 0x7f6070000b90 and 0x129cb40, always against addr=10.0.0.2, port=4420.
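On Linux, errno 111 is ECONNREFUSED: the TCP connect() is actively refused because nothing is accepting on 10.0.0.2:4420 at that moment. The following standalone sketch is hypothetical and not part of SPDK or of this test; it only illustrates the same socket-level condition that posix_sock_create reports, using the address and port taken from the log above.

/* Hypothetical sketch: plain TCP connect() to the NVMe/TCP listener address
 * from the log.  While no target is listening on 10.0.0.2:4420, connect()
 * fails with errno 111 (ECONNREFUSED), matching the log entries above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),          /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Expected while the listener is down: connect: errno=111 (Connection refused) */
        printf("connect: errno=%d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}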
00:35:50.591 [2024-11-20 00:00:24.600864] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization...
00:35:50.591 [2024-11-20 00:00:24.600933] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
The connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triple continues uninterrupted around these two initialization entries (00:00:24.600-00:00:24.601), still on tqpair 0x7f6068000b90 against addr=10.0.0.2, port=4420.
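In the EAL parameters above, -c 0xF0 is the DPDK core mask; 0xF0 has bits 4-7 set, so the nvmf target is pinned to cores 4-7. The decoder below is a hypothetical helper written only to make that mapping concrete; it is not part of SPDK or DPDK.

/* Hypothetical sketch: decode the "-c 0xF0" EAL core mask from the log.
 * Each set bit selects one CPU core; 0xF0 -> cores 4, 5, 6, 7. */
#include <stdio.h>

int main(void)
{
    unsigned long coremask = 0xF0;          /* value from the EAL parameters above */

    printf("cores selected by 0x%lX:", coremask);
    for (int core = 0; core < 64; core++) {
        if (coremask & (1UL << core))
            printf(" %d", core);
    }
    printf("\n");                           /* prints: cores selected by 0xF0: 4 5 6 7 */
    return 0;
}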
00:35:50.591-00:35:50.594: the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triple keeps repeating through 00:00:24.602-00:00:24.611, cycling over tqpair handles 0x7f6068000b90, 0x7f6070000b90, 0x7f6064000b90 and 0x129cb40, always against addr=10.0.0.2, port=4420.
00:35:50.594 [2024-11-20 00:00:24.611801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.611829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.611921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.611948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.612078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.612105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.612188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.612214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.612298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.612325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.612442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.612469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.612595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.612622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.612743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.612769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.612903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.612933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.613053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.613088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 
00:35:50.594 [2024-11-20 00:00:24.613188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.613215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.613334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.613371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.613495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.613522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.613612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.613639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.613756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.613782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.613921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.613961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.614132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.614172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.614275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.614304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.614409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.614435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 00:35:50.594 [2024-11-20 00:00:24.614527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.594 [2024-11-20 00:00:24.614562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.594 qpair failed and we were unable to recover it. 
00:35:50.595 [2024-11-20 00:00:24.614653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.614680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.614783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.614828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.614957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.614986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.615084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.615113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.615317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.615344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.615472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.615499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.615698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.615726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.615811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.615838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.615931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.615958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.616059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.616094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 
00:35:50.595 [2024-11-20 00:00:24.616192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.616220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.616316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.616343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.616435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.616462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.616583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.616611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.616708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.616735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.616862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.616889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.616984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.617010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.617104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.617132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.617222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.617249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.617336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.617363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 
00:35:50.595 [2024-11-20 00:00:24.617463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.617490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.617584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.617610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.617699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.617728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.617815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.617841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.617969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.618009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.618127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.618156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.618272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.618299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.618460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.618486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.618610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.618637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.618753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.618779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 
00:35:50.595 [2024-11-20 00:00:24.618903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.618930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.619047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.619087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.619188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.619215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.619312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.619338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.619463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.619490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.619613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.619640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.619767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.619794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.619887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.619914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.595 [2024-11-20 00:00:24.620034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.595 [2024-11-20 00:00:24.620059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.595 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.620187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.620213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 
00:35:50.596 [2024-11-20 00:00:24.620308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.620335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.620425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.620451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.620603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.620630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.620719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.620745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.620880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.620919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.621025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.621055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.621194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.621223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.621347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.621374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.621489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.621516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.621617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.621645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 
00:35:50.596 [2024-11-20 00:00:24.621731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.621757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.621874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.621900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.621997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.622023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.622123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.622151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.622254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.622281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.622389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.622428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.622531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.622560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.622675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.622703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.622823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.622850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.622999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.623027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 
00:35:50.596 [2024-11-20 00:00:24.623155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.623182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.623278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.623305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.623392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.623419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.623536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.623562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.623695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.623721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.623831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.623857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.623979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.624006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.624109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.624136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.624223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.624249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.624341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.624368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 
00:35:50.596 [2024-11-20 00:00:24.624486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.624512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.624607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.624634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.624760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.624787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.624892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.624918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.625033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.625060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.625163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.625189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.625277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.625304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.625431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.625457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.625566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.625592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.625716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.625741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 
00:35:50.596 [2024-11-20 00:00:24.625861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.625887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.625991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.626024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.626150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.626190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.626299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.626328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.626433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.626461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.626564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.626591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.626697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.626735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.626863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.596 [2024-11-20 00:00:24.626891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.596 qpair failed and we were unable to recover it. 00:35:50.596 [2024-11-20 00:00:24.626977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.627004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.627096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.627123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 
00:35:50.597 [2024-11-20 00:00:24.627221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.627248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.627339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.627374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.627492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.627518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.627613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.627641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.627727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.627754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.627844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.627875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.627979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.628018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.628148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.628188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.628281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.628309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.628405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.628433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 
00:35:50.597 [2024-11-20 00:00:24.628588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.628615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.628710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.628738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.628897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.628925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.629029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.629059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.629172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.629201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.629291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.629318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.629418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.629446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.629535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.629562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.629686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.629715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.629843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.629870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 
00:35:50.597 [2024-11-20 00:00:24.630004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.630043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.630150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.630178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.630305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.630333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.630427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.630455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.630575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.630601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.630693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.630721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.630809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.630836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.630940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.630979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.631101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.631131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.631218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.631245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 
00:35:50.597 [2024-11-20 00:00:24.631363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.631390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.631502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.631529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.631647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.631681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.631783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.631810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.631910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.631938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.632033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.632060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.632183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.632210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.632303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.597 [2024-11-20 00:00:24.632330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.597 qpair failed and we were unable to recover it. 00:35:50.597 [2024-11-20 00:00:24.632455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.632481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.632574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.632600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 
00:35:50.598 [2024-11-20 00:00:24.632742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.632782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.632947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.632974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.633087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.633116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.633221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.633248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.633344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.633381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.633461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.633487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.633610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.633638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.633763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.633794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.633904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.633944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.634037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.634066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 
00:35:50.598 [2024-11-20 00:00:24.634175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.634202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.634300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.634326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.634429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.634456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.634555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.634584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.634688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.634717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.634817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.634844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.634972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.634999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.635105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.635141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.635232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.635259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.635357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.635385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 
00:35:50.598 [2024-11-20 00:00:24.635507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.635533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.635649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.635676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.635820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.635847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.635937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.635965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.636060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.636097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.636201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.636229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.636344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.636372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.636466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.636492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.598 [2024-11-20 00:00:24.636644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.598 [2024-11-20 00:00:24.636670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.598 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.636774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.636802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 
00:35:50.599 [2024-11-20 00:00:24.636919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.636947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.637060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.637095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.637193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.637226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.637326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.637353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.637479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.637507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.637595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.637622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.637739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.637767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.637857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.637885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.638016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.638077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.638182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.638211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 
00:35:50.599 [2024-11-20 00:00:24.638315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.638342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.638440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.638466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.638557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.638583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.638713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.638739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.638842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.638871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.639004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.639043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.639201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.639230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.639357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.639384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.639480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.639506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.639591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.639617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 
00:35:50.599 [2024-11-20 00:00:24.639737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.639764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.639877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.639903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.640033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.640080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.640184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.640211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.640337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.640368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.640461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.640487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.640619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.640646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.640739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.640766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.640861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.640887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.641016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.641047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 
00:35:50.599 [2024-11-20 00:00:24.641151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.641180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.641273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.641300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.641398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.641424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.641544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.641570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.641693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.641721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.641836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.641864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.599 qpair failed and we were unable to recover it. 00:35:50.599 [2024-11-20 00:00:24.641984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.599 [2024-11-20 00:00:24.642013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.642131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.642171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.642264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.642292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.642413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.642440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 
00:35:50.600 [2024-11-20 00:00:24.642560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.642587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.642678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.642704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.642817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.642844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.642945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.642972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.643187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.643226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.643333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.643369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.643495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.643521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.643639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.643665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.643768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.643795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.643884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.643912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 
00:35:50.600 [2024-11-20 00:00:24.644029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.644055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.644193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.644220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.644319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.644346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.644445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.644471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.644562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.644589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.644671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.644697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.644808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.644847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.644982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.645012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.645120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.645148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.645243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.645270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 
00:35:50.600 [2024-11-20 00:00:24.645370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.645397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.645516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.645543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.645643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.645670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.645803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.645829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.645927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.645953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.646085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.646112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.646211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.646237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.646330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.646356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.646446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.646472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.646594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.646620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 
00:35:50.600 [2024-11-20 00:00:24.646741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.646767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.646880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.646907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.646994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.647021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.647121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.647148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.647232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.647261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.647391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.647430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.647563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.647594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.647688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.647716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.600 [2024-11-20 00:00:24.647832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.600 [2024-11-20 00:00:24.647859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.600 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.647952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.647978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 
00:35:50.601 [2024-11-20 00:00:24.648065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.648096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.648193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.648219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.648345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.648374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.648497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.648527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.648648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.648676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.648796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.648822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.648935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.648961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.649049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.649093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.649177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.649202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.649325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.649354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 
00:35:50.601 [2024-11-20 00:00:24.649468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.649496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.649592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.649620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.649749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.649777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.649897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.649924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.650044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.650075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.650174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.650202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.650292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.650325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.650434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.650461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.650607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.650635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.650752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.650779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 
00:35:50.601 [2024-11-20 00:00:24.650875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.650901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.650988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.651014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.651126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.651154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.651271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.651298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.651400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.651426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.651528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.651555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.651763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.651812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.651964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.651992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.652118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.652146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.652264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.652291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 
00:35:50.601 [2024-11-20 00:00:24.652394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.652421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.652518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.652545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.652644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.652671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.652791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.652821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.652925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.652965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.653128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.653158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.653249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.653275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.601 qpair failed and we were unable to recover it. 00:35:50.601 [2024-11-20 00:00:24.653390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.601 [2024-11-20 00:00:24.653416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.653532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.653560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.653648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.653675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 
00:35:50.602 [2024-11-20 00:00:24.653798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.653827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.653920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.653949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.654035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.654082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.654191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.654223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.654340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.654367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.654479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.654506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.654625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.654652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.654788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.654817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.654938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.654965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.655048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.655082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 
00:35:50.602 [2024-11-20 00:00:24.655207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.655234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.655317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.655344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.655464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.655491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.655615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.655641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.655753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.655780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.655934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.655973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.656083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.656112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.656244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.656273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.656405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.656432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.656579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.656606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 
00:35:50.602 [2024-11-20 00:00:24.656696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.656723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.656840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.656866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.656953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.656980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.657080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.657108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.657203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.657232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.657329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.657355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.657476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.657507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.657597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.657624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.657744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.657770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.657858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.657886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 
00:35:50.602 [2024-11-20 00:00:24.658007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.658034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.658206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.658246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.658348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.658378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.658503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.658530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.658620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.658646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.658789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.658816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.658931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.658957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.659043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.659076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.659192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.659220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 00:35:50.602 [2024-11-20 00:00:24.659367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.659407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.602 qpair failed and we were unable to recover it. 
00:35:50.602 [2024-11-20 00:00:24.659505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.602 [2024-11-20 00:00:24.659534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.659660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.659688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.659782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.659809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.659963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.659995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.660094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.660121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.660215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.660243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.660341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.660369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.660461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.660489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.660617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.660645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.660744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.660770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 
00:35:50.603 [2024-11-20 00:00:24.660888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.660916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.661012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.661039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.661146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.661174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.661295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.661321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.661410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.661437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.661534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.661560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.661691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.661717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.661838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.661865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.662004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.662044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.662177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.662206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 
00:35:50.603 [2024-11-20 00:00:24.662305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.662333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.662450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.662477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.662601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.662628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.662728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.662757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.662845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.662872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.662956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.662984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.663086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.663113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.663203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.663231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.663318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.663346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.663469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.663508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 
00:35:50.603 [2024-11-20 00:00:24.663628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.663663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.663761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.663788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.663909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.603 [2024-11-20 00:00:24.663936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-11-20 00:00:24.664081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.664108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.664201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.664228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.664324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.664352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.664485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.664513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.664618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.664657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.664755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.664783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.664906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.664934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 
00:35:50.604 [2024-11-20 00:00:24.665031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.665057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.665180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.665206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.665300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.665326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.665445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.665471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.665594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.665621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.665743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.665771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.665882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.665922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.666058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.666097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.666221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.666248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.666337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.666364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 
00:35:50.604 [2024-11-20 00:00:24.666459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.666485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.666613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.666640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.666741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.666769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.666898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.666938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.667040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.667077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.667174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.667202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.667325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.667352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.667454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.667482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.667603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.667629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.667723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.667751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 
00:35:50.604 [2024-11-20 00:00:24.667839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.667865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.667956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.667983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.668131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.668158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.668248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.668276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.668370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.668397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.668524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.668551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.668668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.668695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.668780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.668807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-11-20 00:00:24.668926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.604 [2024-11-20 00:00:24.668952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.669075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.669102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 
00:35:50.605 [2024-11-20 00:00:24.669199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.669231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.669325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.669351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.669443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.669470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.669562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.669597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.669720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.669748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.669848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.669876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.670011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.670050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.670177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.670205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.670305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.670332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.670420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.670447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 
00:35:50.605 [2024-11-20 00:00:24.670549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.670577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.670664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.670692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.670811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.670838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.670932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.670958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.671090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.671118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.671197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.671224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.671305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.671331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.671422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.671449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.671541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.671569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.671689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.671716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 
00:35:50.605 [2024-11-20 00:00:24.671861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.671889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.672008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.672037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.672170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.672198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.672280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.672306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.672439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.672466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.672582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.672609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.672727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.672754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.672865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.672911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.673029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.673075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.673181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.673209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 
00:35:50.605 [2024-11-20 00:00:24.673331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.673358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.673471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.673498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.673588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.673615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.673713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.673741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.673858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.673888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.673977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.674005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.674120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.674148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.674232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.674259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.674407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.674434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.674561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.674588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 
00:35:50.605 [2024-11-20 00:00:24.674674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.605 [2024-11-20 00:00:24.674701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.605 qpair failed and we were unable to recover it. 00:35:50.605 [2024-11-20 00:00:24.674823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.674850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.674939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.674967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.675107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.675136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.675221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.675248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.675330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.675356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.675511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.675538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.675673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.675711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.675832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.675860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.675973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.676001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 
00:35:50.606 [2024-11-20 00:00:24.676101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.676129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.676218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.676245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.676363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.676389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.676468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.676495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.676587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.676615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.676708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.676735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.676827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.676853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.676942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.676969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.677104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.677144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.677268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.677296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 
00:35:50.606 [2024-11-20 00:00:24.677393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.677420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.677504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.677532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.677620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.677647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.677760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.677786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.677878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.677905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.677986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.678013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.678136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.678165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.678259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.678291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.678412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.678440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.678589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.678615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 
00:35:50.606 [2024-11-20 00:00:24.678718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.678747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.678850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.678877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.678995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.679023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.679127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.679155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.679243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.679271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.679364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.679391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.679514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.679541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.679657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.679684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.679774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.679802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.679900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.679929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 
00:35:50.606 [2024-11-20 00:00:24.680014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.680040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.680171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.680198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.680290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.680317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.606 qpair failed and we were unable to recover it. 00:35:50.606 [2024-11-20 00:00:24.680441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.606 [2024-11-20 00:00:24.680467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.680552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.680578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.680736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.680764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.680926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.680966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.681063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.681098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.681190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.681216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.681326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.681352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 
00:35:50.607 [2024-11-20 00:00:24.681451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.607 [2024-11-20 00:00:24.681478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420
00:35:50.607 qpair failed and we were unable to recover it.
00:35:50.607 [2024-11-20 00:00:24.681508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:50.607 [2024-11-20 00:00:24.681566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.607 [2024-11-20 00:00:24.681592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420
00:35:50.607 qpair failed and we were unable to recover it.
00:35:50.607 [2024-11-20 00:00:24.681714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.607 [2024-11-20 00:00:24.681743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:50.607 qpair failed and we were unable to recover it.
00:35:50.607 [2024-11-20 00:00:24.681856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.607 [2024-11-20 00:00:24.681883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:50.607 qpair failed and we were unable to recover it.
00:35:50.607 [2024-11-20 00:00:24.681977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.607 [2024-11-20 00:00:24.682003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:50.607 qpair failed and we were unable to recover it.
00:35:50.607 [2024-11-20 00:00:24.682095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.607 [2024-11-20 00:00:24.682123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:50.607 qpair failed and we were unable to recover it.
00:35:50.607 [2024-11-20 00:00:24.682214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.607 [2024-11-20 00:00:24.682241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:50.607 qpair failed and we were unable to recover it.
00:35:50.607 [2024-11-20 00:00:24.682327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.607 [2024-11-20 00:00:24.682353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:50.607 qpair failed and we were unable to recover it.
00:35:50.607 [2024-11-20 00:00:24.682470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.607 [2024-11-20 00:00:24.682497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420
00:35:50.607 qpair failed and we were unable to recover it.
00:35:50.607 [2024-11-20 00:00:24.682608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.682647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.682750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.682779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.682874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.682901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.682985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.683011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.683144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.683171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.683290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.683317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.683406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.683433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.683572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.683598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.683724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.683752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.683846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.683873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 
00:35:50.607 [2024-11-20 00:00:24.683961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.683987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.684087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.684115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.684238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.684265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.684356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.607 [2024-11-20 00:00:24.684383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.607 qpair failed and we were unable to recover it. 00:35:50.607 [2024-11-20 00:00:24.684488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.684515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.684658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.684685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.684791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.684830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.684964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.684992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.685113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.685141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.685236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.685262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 
00:35:50.608 [2024-11-20 00:00:24.685392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.685418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.685512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.685544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.685642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.685670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.685789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.685817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.685916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.685943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.686062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.686097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.686182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.686208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.686312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.686339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.686430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.686457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.686566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.686606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 
00:35:50.608 [2024-11-20 00:00:24.686714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.686762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.686888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.686917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.687039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.687065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.687170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.687198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.687291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.687317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.687449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.687478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.687576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.687603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.687708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.687736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.687856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.687884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.688012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.688040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 
00:35:50.608 [2024-11-20 00:00:24.688146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.688178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.688282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.688309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.688421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.688449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.688580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.688607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.688710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.688738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.688825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.688852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.688951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.688979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.689102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.689130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.689270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.689309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.689425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.689456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 
00:35:50.608 [2024-11-20 00:00:24.689577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.689604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.689695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.689723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.689875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.689902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.690002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.690031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.690130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.690158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.690250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.690277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.690378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.690406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.608 qpair failed and we were unable to recover it. 00:35:50.608 [2024-11-20 00:00:24.690503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.608 [2024-11-20 00:00:24.690530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.690649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.690676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.690794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.690821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 
00:35:50.609 [2024-11-20 00:00:24.690949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.690976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.691079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.691115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.691209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.691236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.691385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.691411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.691506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.691532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.691671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.691698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.691784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.691810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.691935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.691961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.692101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.692131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.692228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.692255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 
00:35:50.609 [2024-11-20 00:00:24.692368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.692404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.692525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.692551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.692645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.692672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.692791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.692818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.692910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.692937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.693036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.693063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.693229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.693257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.693344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.693371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.693456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.693483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.693574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.693602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 
00:35:50.609 [2024-11-20 00:00:24.693724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.693751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.693834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.693865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.693966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.693994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.694081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.694108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.694233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.694260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.694372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.694398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.694545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.694571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.694685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.694712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.694800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.694831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.694929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.694955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 
00:35:50.609 [2024-11-20 00:00:24.695039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.695077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.695174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.695201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.695325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.695351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.695458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.695484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.695569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.695595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.695693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.695721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.695854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.609 [2024-11-20 00:00:24.695894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.609 qpair failed and we were unable to recover it. 00:35:50.609 [2024-11-20 00:00:24.695995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.696024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.696133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.696160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.696265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.696292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 
00:35:50.610 [2024-11-20 00:00:24.696404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.696431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.696558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.696585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.696687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.696715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.696835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.696867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.696989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.697022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.697129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.697157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.697277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.697303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.697431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.697458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.697580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.697606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.697732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.697759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 
00:35:50.610 [2024-11-20 00:00:24.697852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.697878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.697970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.697997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.698118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.698146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.698264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.698290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.698405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.698432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.698532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.698563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.698647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.698674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.698799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.698827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.698957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.698997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.699122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.699162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 
00:35:50.610 [2024-11-20 00:00:24.699287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.699315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.699441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.699469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.699571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.699605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.699703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.699731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.699827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.699855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.699950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.699977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.700102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.700130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.700226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.700256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.700351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.700379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.700529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.700557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 
00:35:50.610 [2024-11-20 00:00:24.700693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.700731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.700820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.700847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.610 [2024-11-20 00:00:24.700969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.610 [2024-11-20 00:00:24.700996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.610 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.701101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.701129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.701243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.701269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.701368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.701395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.701478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.701505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.701624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.701650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.701740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.701768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.701920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.701948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 
00:35:50.611 [2024-11-20 00:00:24.702062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.702095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.702187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.702215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.702373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.702400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.702518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.702544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.702664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.702690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.702810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.702838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.702943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.702982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.703085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.703113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.703259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.703286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.703385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.703410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 
00:35:50.611 [2024-11-20 00:00:24.703494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.703520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.703629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.703668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.703779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.703807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.703918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.703958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.704059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.704092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.704212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.704243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.704337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.704363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.704494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.704522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.704621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.704647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.704763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.704789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 
00:35:50.611 [2024-11-20 00:00:24.704903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.704942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.705084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.705113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.705244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.705271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.705367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.705393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.705480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.705513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.705636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.705662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.705755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.705783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.705887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.705927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.706041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.706099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.706257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.706286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 
00:35:50.611 [2024-11-20 00:00:24.706392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.706421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.611 [2024-11-20 00:00:24.706554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.611 [2024-11-20 00:00:24.706582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.611 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.706676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.706702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.706829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.706856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.706966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.707005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.707109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.707137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.707265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.707292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.707414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.707441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.707556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.707583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.707707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.707738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 
00:35:50.612 [2024-11-20 00:00:24.707858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.707886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.707982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.708012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.708140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.708180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.708314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.708343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.708466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.708494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.612 [2024-11-20 00:00:24.708587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.612 [2024-11-20 00:00:24.708615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.612 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.708739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.708766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.708886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.708912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.709044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.709098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.709208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.709238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-20 00:00:24.709332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.709360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.709458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.709484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.709578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.709613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.709736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.709762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.709854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.709882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.710009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.710044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.710155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.710183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.710281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.710308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.710410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.710439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.710530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.710557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-20 00:00:24.710706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.710733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.710847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.710886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.710984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.711013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.711134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.711161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.711250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.711277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.711375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.711401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.711526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.711553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.711676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.711706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.711795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.711822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.711969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.711997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-20 00:00:24.712114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.712141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.712236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.712263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.712342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.712377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.712470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.712495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.712605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.712636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.712766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.712796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.712889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.712917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.713033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.713059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.713150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.713176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.713306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.713332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 
00:35:50.613 [2024-11-20 00:00:24.713422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.713448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.713544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.713570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.713696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.713727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.613 [2024-11-20 00:00:24.713819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.613 [2024-11-20 00:00:24.713847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.613 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.713982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.714022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.714135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.714163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.714283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.714310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.714426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.714453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.714577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.714603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.714690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.714717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 
00:35:50.614 [2024-11-20 00:00:24.714823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.714862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.715004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.715045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.715167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.715195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.715295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.715321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.715453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.715489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.715583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.715611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.715742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.715770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.715863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.715889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.716009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.716035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.716164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.716190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 
00:35:50.614 [2024-11-20 00:00:24.716279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.716305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.716415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.716441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.716523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.716549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.716670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.716697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.716822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.716849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.716941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.716970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.717092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.717119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.717218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.717250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.717375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.717404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.717507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.717535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 
00:35:50.614 [2024-11-20 00:00:24.717629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.717656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.717749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.717776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.717887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.717913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.718033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.718059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.718154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.718190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.718287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.718314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.718409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.718436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.718558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.718584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.718680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.718706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.718809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.718849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 
00:35:50.614 [2024-11-20 00:00:24.718947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.718975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.614 [2024-11-20 00:00:24.719113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.614 [2024-11-20 00:00:24.719142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.614 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.719236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.719269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.719365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.719392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.719483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.719509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.719597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.719625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.719717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.719744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.719844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.719873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.719989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.720015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.720140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.720169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 
00:35:50.615 [2024-11-20 00:00:24.720253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.720279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.720378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.720406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.720505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.720532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.720619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.720646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.720794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.720829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.720951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.720977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.721082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.721109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.721195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.721221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.721314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.721341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.721428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.721457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 
00:35:50.615 [2024-11-20 00:00:24.721543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.721572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.721719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.721748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.721894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.721920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.722014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.722040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.722151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.722177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.722275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.722302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.722394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.722420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.722540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.722568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.722660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.615 [2024-11-20 00:00:24.722687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.615 qpair failed and we were unable to recover it. 00:35:50.615 [2024-11-20 00:00:24.722781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.722814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 
00:35:50.616 [2024-11-20 00:00:24.722913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.722941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.723065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.723105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.723200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.723227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.723342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.723369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.723487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.723515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.723608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.723634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.723728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.723755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.723851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.723878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.724008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.724047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.724180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.724208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 
00:35:50.616 [2024-11-20 00:00:24.724302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.724330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.724453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.724487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.724600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.724626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.724715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.724743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.724839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.724867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.725004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.725043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.725147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.725175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.725272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.725298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.725398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.725425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.725547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.725573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 
00:35:50.616 [2024-11-20 00:00:24.725700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.725729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.725854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.725882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.726017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.726057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.726168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.726197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.726322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.726349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.726443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.726470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.726576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.726605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.726751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.726778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.616 qpair failed and we were unable to recover it. 00:35:50.616 [2024-11-20 00:00:24.726868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.616 [2024-11-20 00:00:24.726896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.727013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.727039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 
00:35:50.617 [2024-11-20 00:00:24.727142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.727171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.727290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.727317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.727397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.727422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.727540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.727566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.727649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.727676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.727830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.727859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.727946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.727972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.728065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.728099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.728202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.728229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.728316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.728341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 
00:35:50.617 [2024-11-20 00:00:24.728444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.728470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.728551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.728577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.728669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.728696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.728779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.728805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.728909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.728937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.729036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.729064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.729215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.729242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.729326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.729352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.729468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.729494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.729590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.729617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 
00:35:50.617 [2024-11-20 00:00:24.729713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.729740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.729857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.729886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.730018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.730057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.730176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.730205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.730298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.730327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.730425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.730452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.617 [2024-11-20 00:00:24.730554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.617 [2024-11-20 00:00:24.730581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.617 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.730737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.730764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.730891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.730919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.731044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.731079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 
00:35:50.618 [2024-11-20 00:00:24.731173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.731200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.731293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.731319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.731410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.731438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.731529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.731556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.731641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.731669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.731758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.731786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.731881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.731915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.732008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.732036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.732140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.732168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.732262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.732289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 
00:35:50.618 [2024-11-20 00:00:24.732384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.732411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.732516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.732543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.732636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.732664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.732713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:50.618 [2024-11-20 00:00:24.732748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:50.618 [2024-11-20 00:00:24.732763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:50.618 [2024-11-20 00:00:24.732776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:50.618 [2024-11-20 00:00:24.732788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:50.618 [2024-11-20 00:00:24.732776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.732803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.732925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.732951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.733041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.733065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.733175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.733202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.733322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.733348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it.
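The app_setup_trace notices in the entry above record that the target was started with tracepoint group mask 0xFFFF, and they name the two ways to pull the trace: run spdk_trace against shared-memory instance 0, or copy /dev/shm/nvmf_trace.0 for offline analysis. A short sketch using only the commands the notice itself names (the output paths below are arbitrary examples, not taken from this run):

    # Snapshot the live trace of the running nvmf target (shared-memory instance 0).
    spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt

    # Or keep the raw shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0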
00:35:50.618 [2024-11-20 00:00:24.733473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.733500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.733616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.733642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.733729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.733755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.733885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.733913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.734010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.734039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.734159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.734198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.734300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.734331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.734410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:50.618 [2024-11-20 00:00:24.734475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.734502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.734435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:50.618 [2024-11-20 00:00:24.734605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.734630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 
00:35:50.618 [2024-11-20 00:00:24.734484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:50.618 [2024-11-20 00:00:24.734487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:50.618 [2024-11-20 00:00:24.734735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.734761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.734858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.734887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.735006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.735031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.618 qpair failed and we were unable to recover it. 00:35:50.618 [2024-11-20 00:00:24.735142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.618 [2024-11-20 00:00:24.735168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.735261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.735287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.735395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.735421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.735505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.735531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.735622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.735648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.735743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.735768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it.
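The reactor_run notices in the entries above show the target's pollers coming up on cores 4 through 7, which corresponds to a core mask of 0xF0, alongside the 0xFFFF tracepoint group mask noted earlier. As an illustration only (the binary path and flags below are assumptions based on the conventional SPDK application options -m for the core mask and -e for the tracepoint group mask, not the exact command used in this run), such a target would typically be launched as:

    # Illustrative launch matching the reactor (cores 4-7) and tracepoint
    # (mask 0xFFFF) notices above; not taken from this log.
    ./build/bin/nvmf_tgt -m 0xF0 -e 0xFFFF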
00:35:50.619 [2024-11-20 00:00:24.735877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.735917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.736010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.736037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.736137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.736167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.736260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.736287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.736392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.736424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.736513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.736540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.736624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.736651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.736743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.736775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.736921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.736960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.737059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.737094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 
00:35:50.619 [2024-11-20 00:00:24.737190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.737217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.737304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.737330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.737433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.737461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.737560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.737588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.737683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.737710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.737819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.737845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.737950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.737977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.738081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.738108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.738205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.738231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.738321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.738348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 
00:35:50.619 [2024-11-20 00:00:24.738454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.738481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.738578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.738604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.738691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.738718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.738799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.738824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.738905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.738931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.739023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.739049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.739151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.619 [2024-11-20 00:00:24.739178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.619 qpair failed and we were unable to recover it. 00:35:50.619 [2024-11-20 00:00:24.739309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.739338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.739440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.739467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.739568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.739608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 
00:35:50.620 [2024-11-20 00:00:24.739718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.739748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.739844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.739872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.739994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.740022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.740122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.740150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.740244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.740277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.740387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.740414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.740499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.740527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.740653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.740680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.740770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.740798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.740886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.740914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 
00:35:50.620 [2024-11-20 00:00:24.741004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.741031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.741125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.741152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.741248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.741273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.741360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.741386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.741476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.741504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.741607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.741635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.741761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.741789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.741915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.741943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.742077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.742105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.742212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.742239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 
00:35:50.620 [2024-11-20 00:00:24.742350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.742377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.742481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.742509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.742593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.742620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.742700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.742727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.742827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.742853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.742957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.742984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.743075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.620 [2024-11-20 00:00:24.743102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.620 qpair failed and we were unable to recover it. 00:35:50.620 [2024-11-20 00:00:24.743223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.621 [2024-11-20 00:00:24.743250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.621 qpair failed and we were unable to recover it. 00:35:50.621 [2024-11-20 00:00:24.743351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.621 [2024-11-20 00:00:24.743377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.621 qpair failed and we were unable to recover it. 00:35:50.621 [2024-11-20 00:00:24.743477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.621 [2024-11-20 00:00:24.743503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.621 qpair failed and we were unable to recover it. 
00:35:50.621 [2024-11-20 00:00:24.743614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.621 [2024-11-20 00:00:24.743653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.621 qpair failed and we were unable to recover it. 00:35:50.621 [2024-11-20 00:00:24.743791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.621 [2024-11-20 00:00:24.743831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.621 qpair failed and we were unable to recover it. 00:35:50.621 [2024-11-20 00:00:24.743951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.621 [2024-11-20 00:00:24.743980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.621 qpair failed and we were unable to recover it. 00:35:50.621 [2024-11-20 00:00:24.744077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.621 [2024-11-20 00:00:24.744106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.621 qpair failed and we were unable to recover it. 00:35:50.621 [2024-11-20 00:00:24.744234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.621 [2024-11-20 00:00:24.744261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.621 qpair failed and we were unable to recover it. 00:35:50.621 [2024-11-20 00:00:24.744355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.621 [2024-11-20 00:00:24.744383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.621 qpair failed and we were unable to recover it. 00:35:50.621 [2024-11-20 00:00:24.744531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.621 [2024-11-20 00:00:24.744559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.621 qpair failed and we were unable to recover it. 00:35:50.621 [2024-11-20 00:00:24.744653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.621 [2024-11-20 00:00:24.744680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.621 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.744778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.744807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.744899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.744927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 
00:35:50.622 [2024-11-20 00:00:24.745016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.745043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.745143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.745170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.745265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.745292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.745379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.745406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.745547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.745574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.745675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.745702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.745790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.745817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.745940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.745967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.746066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.746106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.746215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.746242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 
00:35:50.622 [2024-11-20 00:00:24.746336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.746362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.746452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.746478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.746596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.746623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.746713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.746741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.746833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.746860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.746946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.746973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.747066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.747099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.747188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.747214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.747337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.747374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 00:35:50.622 [2024-11-20 00:00:24.747489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.622 [2024-11-20 00:00:24.747516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.622 qpair failed and we were unable to recover it. 
00:35:50.622 - 00:35:50.645 [2024-11-20 00:00:24.747610 - 00:00:24.773451]: the same two errors recur for every further reconnect attempt in this interval: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40, 0x7f6068000b90, or 0x7f6064000b90 with addr=10.0.0.2, port=4420; each attempt ends with: qpair failed and we were unable to recover it.
00:35:50.645 [2024-11-20 00:00:24.773550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.645 [2024-11-20 00:00:24.773578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.645 qpair failed and we were unable to recover it. 00:35:50.645 [2024-11-20 00:00:24.773707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.645 [2024-11-20 00:00:24.773735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.645 qpair failed and we were unable to recover it. 00:35:50.645 [2024-11-20 00:00:24.773821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.645 [2024-11-20 00:00:24.773847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.645 qpair failed and we were unable to recover it. 00:35:50.645 [2024-11-20 00:00:24.773941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.645 [2024-11-20 00:00:24.773967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.645 qpair failed and we were unable to recover it. 00:35:50.645 [2024-11-20 00:00:24.774055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.645 [2024-11-20 00:00:24.774095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.645 qpair failed and we were unable to recover it. 00:35:50.645 [2024-11-20 00:00:24.774211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.645 [2024-11-20 00:00:24.774237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.645 qpair failed and we were unable to recover it. 00:35:50.645 [2024-11-20 00:00:24.774323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.645 [2024-11-20 00:00:24.774356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.645 qpair failed and we were unable to recover it. 00:35:50.645 [2024-11-20 00:00:24.774446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.649 [2024-11-20 00:00:24.774472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.649 qpair failed and we were unable to recover it. 00:35:50.649 [2024-11-20 00:00:24.774563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.649 [2024-11-20 00:00:24.774588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.649 qpair failed and we were unable to recover it. 00:35:50.649 [2024-11-20 00:00:24.774679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.649 [2024-11-20 00:00:24.774706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.649 qpair failed and we were unable to recover it. 
00:35:50.649 [2024-11-20 00:00:24.774806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.649 [2024-11-20 00:00:24.774834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.649 qpair failed and we were unable to recover it. 00:35:50.649 [2024-11-20 00:00:24.774956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.774984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.650 qpair failed and we were unable to recover it. 00:35:50.650 [2024-11-20 00:00:24.775078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.775107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.650 qpair failed and we were unable to recover it. 00:35:50.650 [2024-11-20 00:00:24.775212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.775240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.650 qpair failed and we were unable to recover it. 00:35:50.650 [2024-11-20 00:00:24.775357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.775386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.650 qpair failed and we were unable to recover it. 00:35:50.650 [2024-11-20 00:00:24.775488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.775516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.650 qpair failed and we were unable to recover it. 00:35:50.650 [2024-11-20 00:00:24.775609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.775637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.650 qpair failed and we were unable to recover it. 00:35:50.650 [2024-11-20 00:00:24.775738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.775764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.650 qpair failed and we were unable to recover it. 00:35:50.650 [2024-11-20 00:00:24.775856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.775883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.650 qpair failed and we were unable to recover it. 00:35:50.650 [2024-11-20 00:00:24.775976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.776002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.650 qpair failed and we were unable to recover it. 
00:35:50.650 [2024-11-20 00:00:24.776094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.776122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.650 qpair failed and we were unable to recover it. 00:35:50.650 [2024-11-20 00:00:24.776209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.776236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.650 qpair failed and we were unable to recover it. 00:35:50.650 [2024-11-20 00:00:24.776318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.776345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.650 qpair failed and we were unable to recover it. 00:35:50.650 [2024-11-20 00:00:24.776455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.650 [2024-11-20 00:00:24.776482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.651 qpair failed and we were unable to recover it. 00:35:50.651 [2024-11-20 00:00:24.776596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.653 [2024-11-20 00:00:24.776623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.654 qpair failed and we were unable to recover it. 00:35:50.654 [2024-11-20 00:00:24.776738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.654 [2024-11-20 00:00:24.776764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.654 qpair failed and we were unable to recover it. 00:35:50.654 [2024-11-20 00:00:24.776854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.654 [2024-11-20 00:00:24.776885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.654 qpair failed and we were unable to recover it. 00:35:50.654 [2024-11-20 00:00:24.777003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.654 [2024-11-20 00:00:24.777029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.654 qpair failed and we were unable to recover it. 00:35:50.654 [2024-11-20 00:00:24.777133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.654 [2024-11-20 00:00:24.777160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.654 qpair failed and we were unable to recover it. 00:35:50.654 [2024-11-20 00:00:24.777265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.654 [2024-11-20 00:00:24.777290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.654 qpair failed and we were unable to recover it. 
00:35:50.654 [2024-11-20 00:00:24.777411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.654 [2024-11-20 00:00:24.777437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.654 qpair failed and we were unable to recover it. 00:35:50.654 [2024-11-20 00:00:24.777534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.654 [2024-11-20 00:00:24.777561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.654 qpair failed and we were unable to recover it. 00:35:50.654 [2024-11-20 00:00:24.777685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.654 [2024-11-20 00:00:24.777711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.654 qpair failed and we were unable to recover it. 00:35:50.655 [2024-11-20 00:00:24.777798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.655 [2024-11-20 00:00:24.777824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.655 qpair failed and we were unable to recover it. 00:35:50.655 [2024-11-20 00:00:24.777944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.655 [2024-11-20 00:00:24.777970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.655 qpair failed and we were unable to recover it. 00:35:50.655 [2024-11-20 00:00:24.778058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.655 [2024-11-20 00:00:24.778094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.655 qpair failed and we were unable to recover it. 00:35:50.655 [2024-11-20 00:00:24.778183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.655 [2024-11-20 00:00:24.778209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.655 qpair failed and we were unable to recover it. 00:35:50.655 [2024-11-20 00:00:24.778304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.655 [2024-11-20 00:00:24.778329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.655 qpair failed and we were unable to recover it. 00:35:50.655 [2024-11-20 00:00:24.778416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.655 [2024-11-20 00:00:24.778442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.655 qpair failed and we were unable to recover it. 00:35:50.655 [2024-11-20 00:00:24.778526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.655 [2024-11-20 00:00:24.778552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.655 qpair failed and we were unable to recover it. 
00:35:50.655 [2024-11-20 00:00:24.778675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.656 [2024-11-20 00:00:24.778701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.656 qpair failed and we were unable to recover it. 00:35:50.656 [2024-11-20 00:00:24.778816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.656 [2024-11-20 00:00:24.778858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.656 qpair failed and we were unable to recover it. 00:35:50.656 [2024-11-20 00:00:24.778966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.656 [2024-11-20 00:00:24.779000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.656 qpair failed and we were unable to recover it. 00:35:50.656 [2024-11-20 00:00:24.779107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.656 [2024-11-20 00:00:24.779157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.656 qpair failed and we were unable to recover it. 00:35:50.656 [2024-11-20 00:00:24.779254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.656 [2024-11-20 00:00:24.779285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.656 qpair failed and we were unable to recover it. 00:35:50.656 [2024-11-20 00:00:24.779394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.656 [2024-11-20 00:00:24.779422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.656 qpair failed and we were unable to recover it. 00:35:50.656 [2024-11-20 00:00:24.779532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.656 [2024-11-20 00:00:24.779559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.656 qpair failed and we were unable to recover it. 00:35:50.656 [2024-11-20 00:00:24.779651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.656 [2024-11-20 00:00:24.779679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.656 qpair failed and we were unable to recover it. 00:35:50.656 [2024-11-20 00:00:24.779764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.656 [2024-11-20 00:00:24.779790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.656 qpair failed and we were unable to recover it. 00:35:50.656 [2024-11-20 00:00:24.779878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.656 [2024-11-20 00:00:24.779907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.657 qpair failed and we were unable to recover it. 
00:35:50.657 [2024-11-20 00:00:24.780002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.657 [2024-11-20 00:00:24.780029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.657 qpair failed and we were unable to recover it. 00:35:50.657 [2024-11-20 00:00:24.780136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.657 [2024-11-20 00:00:24.780164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.657 qpair failed and we were unable to recover it. 00:35:50.657 [2024-11-20 00:00:24.780288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.657 [2024-11-20 00:00:24.780315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.657 qpair failed and we were unable to recover it. 00:35:50.657 [2024-11-20 00:00:24.780417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.657 [2024-11-20 00:00:24.780449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.657 qpair failed and we were unable to recover it. 00:35:50.657 [2024-11-20 00:00:24.780536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.657 [2024-11-20 00:00:24.780563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.657 qpair failed and we were unable to recover it. 00:35:50.657 [2024-11-20 00:00:24.780650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.657 [2024-11-20 00:00:24.780678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.657 qpair failed and we were unable to recover it. 00:35:50.657 [2024-11-20 00:00:24.780792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.657 [2024-11-20 00:00:24.780818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.657 qpair failed and we were unable to recover it. 00:35:50.657 [2024-11-20 00:00:24.780916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.657 [2024-11-20 00:00:24.780942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.657 qpair failed and we were unable to recover it. 00:35:50.657 [2024-11-20 00:00:24.781028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.657 [2024-11-20 00:00:24.781055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.657 qpair failed and we were unable to recover it. 00:35:50.657 [2024-11-20 00:00:24.781182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.657 [2024-11-20 00:00:24.781210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.657 qpair failed and we were unable to recover it. 
00:35:50.657 [2024-11-20 00:00:24.781303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.657 [2024-11-20 00:00:24.781331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.781432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.781459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.781547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.781575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.781677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.781705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.781790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.781816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.781919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.781946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.782098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.782126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.782222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.782250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.782343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.782369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.782452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.782479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 
00:35:50.658 [2024-11-20 00:00:24.782595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.782623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.782713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.782750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.782845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.782872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.782992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.658 [2024-11-20 00:00:24.783021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.658 qpair failed and we were unable to recover it. 00:35:50.658 [2024-11-20 00:00:24.783132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.659 [2024-11-20 00:00:24.783162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.659 qpair failed and we were unable to recover it. 00:35:50.659 [2024-11-20 00:00:24.783263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.659 [2024-11-20 00:00:24.783289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.659 qpair failed and we were unable to recover it. 00:35:50.659 [2024-11-20 00:00:24.783395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.660 [2024-11-20 00:00:24.783421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.660 qpair failed and we were unable to recover it. 00:35:50.660 [2024-11-20 00:00:24.783518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.660 [2024-11-20 00:00:24.783545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.660 qpair failed and we were unable to recover it. 00:35:50.660 [2024-11-20 00:00:24.783669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.660 [2024-11-20 00:00:24.783695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.660 qpair failed and we were unable to recover it. 00:35:50.660 [2024-11-20 00:00:24.783784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.660 [2024-11-20 00:00:24.783814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.660 qpair failed and we were unable to recover it. 
00:35:50.660 [2024-11-20 00:00:24.783903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.660 [2024-11-20 00:00:24.783934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.660 qpair failed and we were unable to recover it. 00:35:50.660 [2024-11-20 00:00:24.784036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.660 [2024-11-20 00:00:24.784062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.660 qpair failed and we were unable to recover it. 00:35:50.660 [2024-11-20 00:00:24.784200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.660 [2024-11-20 00:00:24.784227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.660 qpair failed and we were unable to recover it. 00:35:50.660 [2024-11-20 00:00:24.784310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.660 [2024-11-20 00:00:24.784336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.660 qpair failed and we were unable to recover it. 00:35:50.660 [2024-11-20 00:00:24.784434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.660 [2024-11-20 00:00:24.784460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.660 qpair failed and we were unable to recover it. 00:35:50.660 [2024-11-20 00:00:24.784551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.660 [2024-11-20 00:00:24.784580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.660 qpair failed and we were unable to recover it. 00:35:50.660 [2024-11-20 00:00:24.784674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.660 [2024-11-20 00:00:24.784703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.660 qpair failed and we were unable to recover it. 00:35:50.660 [2024-11-20 00:00:24.784817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.784844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.784939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.784966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.785056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.785093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 
00:35:50.661 [2024-11-20 00:00:24.785226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.785253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.785340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.785367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.785484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.785512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.785606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.785633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.785757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.785785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.785886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.785915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.786086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.786113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.786203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.786231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.786321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.786349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.786442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.786469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 
00:35:50.661 [2024-11-20 00:00:24.786598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.786625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.786727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.786754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.786845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.786872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.786962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.786990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.787133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.661 [2024-11-20 00:00:24.787166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.661 qpair failed and we were unable to recover it. 00:35:50.661 [2024-11-20 00:00:24.787258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-11-20 00:00:24.787285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.662 qpair failed and we were unable to recover it. 00:35:50.662 [2024-11-20 00:00:24.787372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-11-20 00:00:24.787399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.662 qpair failed and we were unable to recover it. 00:35:50.662 [2024-11-20 00:00:24.787517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-11-20 00:00:24.787544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.662 qpair failed and we were unable to recover it. 00:35:50.662 [2024-11-20 00:00:24.787639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-11-20 00:00:24.787668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.662 qpair failed and we were unable to recover it. 00:35:50.662 [2024-11-20 00:00:24.787761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-11-20 00:00:24.787787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.662 qpair failed and we were unable to recover it. 
00:35:50.662 [2024-11-20 00:00:24.787880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-11-20 00:00:24.787907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.662 qpair failed and we were unable to recover it. 00:35:50.662 [2024-11-20 00:00:24.788000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-11-20 00:00:24.788027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.662 qpair failed and we were unable to recover it. 00:35:50.662 [2024-11-20 00:00:24.788133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-11-20 00:00:24.788161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.662 qpair failed and we were unable to recover it. 00:35:50.662 [2024-11-20 00:00:24.788255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.662 [2024-11-20 00:00:24.788281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.662 qpair failed and we were unable to recover it. 00:35:50.662 [2024-11-20 00:00:24.788381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.664 [2024-11-20 00:00:24.788407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.664 qpair failed and we were unable to recover it. 00:35:50.664 [2024-11-20 00:00:24.788520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.665 [2024-11-20 00:00:24.788546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.665 qpair failed and we were unable to recover it. 00:35:50.665 [2024-11-20 00:00:24.788633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.665 [2024-11-20 00:00:24.788660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.665 qpair failed and we were unable to recover it. 00:35:50.665 [2024-11-20 00:00:24.788753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.665 [2024-11-20 00:00:24.788779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.665 qpair failed and we were unable to recover it. 00:35:50.665 [2024-11-20 00:00:24.788891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.665 [2024-11-20 00:00:24.788920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.665 qpair failed and we were unable to recover it. 00:35:50.665 [2024-11-20 00:00:24.789037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.665 [2024-11-20 00:00:24.789065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.665 qpair failed and we were unable to recover it. 
00:35:50.665 [2024-11-20 00:00:24.789175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.665 [2024-11-20 00:00:24.789208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.665 qpair failed and we were unable to recover it. 00:35:50.665 [2024-11-20 00:00:24.789298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.665 [2024-11-20 00:00:24.789325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.665 qpair failed and we were unable to recover it. 00:35:50.665 [2024-11-20 00:00:24.789408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.666 [2024-11-20 00:00:24.789435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.666 qpair failed and we were unable to recover it. 00:35:50.666 [2024-11-20 00:00:24.789519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.666 [2024-11-20 00:00:24.789546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.666 qpair failed and we were unable to recover it. 00:35:50.666 [2024-11-20 00:00:24.789630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.666 [2024-11-20 00:00:24.789659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.666 qpair failed and we were unable to recover it. 00:35:50.666 [2024-11-20 00:00:24.789775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.666 [2024-11-20 00:00:24.789802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.666 qpair failed and we were unable to recover it. 00:35:50.666 [2024-11-20 00:00:24.789903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.666 [2024-11-20 00:00:24.789930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.666 qpair failed and we were unable to recover it. 00:35:50.666 [2024-11-20 00:00:24.790024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.666 [2024-11-20 00:00:24.790051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.666 qpair failed and we were unable to recover it. 00:35:50.666 [2024-11-20 00:00:24.790151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.666 [2024-11-20 00:00:24.790178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.666 qpair failed and we were unable to recover it. 00:35:50.666 [2024-11-20 00:00:24.790267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.666 [2024-11-20 00:00:24.790294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.666 qpair failed and we were unable to recover it. 
00:35:50.666 [2024-11-20 00:00:24.790384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.666 [2024-11-20 00:00:24.790411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.666 qpair failed and we were unable to recover it. 00:35:50.666 [2024-11-20 00:00:24.790499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.666 [2024-11-20 00:00:24.790536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.666 qpair failed and we were unable to recover it. 00:35:50.666 [2024-11-20 00:00:24.790622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.666 [2024-11-20 00:00:24.790649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.667 qpair failed and we were unable to recover it. 00:35:50.667 [2024-11-20 00:00:24.790739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.667 [2024-11-20 00:00:24.790767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.667 qpair failed and we were unable to recover it. 00:35:50.667 [2024-11-20 00:00:24.790863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.667 [2024-11-20 00:00:24.790891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.667 qpair failed and we were unable to recover it. 00:35:50.667 [2024-11-20 00:00:24.790984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.667 [2024-11-20 00:00:24.791009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.667 qpair failed and we were unable to recover it. 00:35:50.667 [2024-11-20 00:00:24.791124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.667 [2024-11-20 00:00:24.791151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.667 qpair failed and we were unable to recover it. 00:35:50.667 [2024-11-20 00:00:24.791246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.667 [2024-11-20 00:00:24.791273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.667 qpair failed and we were unable to recover it. 00:35:50.667 [2024-11-20 00:00:24.791375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.667 [2024-11-20 00:00:24.791402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.667 qpair failed and we were unable to recover it. 00:35:50.667 [2024-11-20 00:00:24.791492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.667 [2024-11-20 00:00:24.791518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.667 qpair failed and we were unable to recover it. 
00:35:50.667 [2024-11-20 00:00:24.791639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.667 [2024-11-20 00:00:24.791667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.667 qpair failed and we were unable to recover it. 00:35:50.667 [2024-11-20 00:00:24.791758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.667 [2024-11-20 00:00:24.791785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.668 qpair failed and we were unable to recover it. 00:35:50.669 [2024-11-20 00:00:24.791886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.669 [2024-11-20 00:00:24.791915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.669 qpair failed and we were unable to recover it. 00:35:50.669 [2024-11-20 00:00:24.791997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.670 [2024-11-20 00:00:24.792025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.670 qpair failed and we were unable to recover it. 00:35:50.670 [2024-11-20 00:00:24.792150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.670 [2024-11-20 00:00:24.792177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.670 qpair failed and we were unable to recover it. 00:35:50.670 [2024-11-20 00:00:24.792267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.670 [2024-11-20 00:00:24.792293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.670 qpair failed and we were unable to recover it. 00:35:50.670 [2024-11-20 00:00:24.792388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.670 [2024-11-20 00:00:24.792415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.670 qpair failed and we were unable to recover it. 00:35:50.670 [2024-11-20 00:00:24.792525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.671 [2024-11-20 00:00:24.792558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.671 qpair failed and we were unable to recover it. 00:35:50.671 [2024-11-20 00:00:24.792651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.671 [2024-11-20 00:00:24.792679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.671 qpair failed and we were unable to recover it. 00:35:50.671 [2024-11-20 00:00:24.792766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.671 [2024-11-20 00:00:24.792794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.671 qpair failed and we were unable to recover it. 
00:35:50.671 [2024-11-20 00:00:24.792920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.671 [2024-11-20 00:00:24.792946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.671 qpair failed and we were unable to recover it. 00:35:50.671 [2024-11-20 00:00:24.793037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.671 [2024-11-20 00:00:24.793063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.671 qpair failed and we were unable to recover it. 00:35:50.671 [2024-11-20 00:00:24.793177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.671 [2024-11-20 00:00:24.793203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.671 qpair failed and we were unable to recover it. 00:35:50.671 [2024-11-20 00:00:24.793291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.671 [2024-11-20 00:00:24.793317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.671 qpair failed and we were unable to recover it. 00:35:50.671 [2024-11-20 00:00:24.793411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.671 [2024-11-20 00:00:24.793439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.671 qpair failed and we were unable to recover it. 00:35:50.671 [2024-11-20 00:00:24.793562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.671 [2024-11-20 00:00:24.793589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.671 qpair failed and we were unable to recover it. 00:35:50.671 [2024-11-20 00:00:24.793689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.671 [2024-11-20 00:00:24.793716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.671 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.793845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.793872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.793962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.793991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.794129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.794157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 
00:35:50.672 [2024-11-20 00:00:24.794280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.794307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.794400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.794428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.794543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.794570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.794692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.794721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.794822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.794850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.794941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.794968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.795057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.795092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.795181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.795209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.795300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.795327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.795423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.795449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 
00:35:50.672 [2024-11-20 00:00:24.795548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.795576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.795668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.795699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.795783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.795810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.795910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.795951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.796066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.796109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.672 qpair failed and we were unable to recover it. 00:35:50.672 [2024-11-20 00:00:24.796252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.672 [2024-11-20 00:00:24.796280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.796387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.796414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.796523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.796551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.796644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.796671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.796798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.796825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 
00:35:50.673 [2024-11-20 00:00:24.796941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.796968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.797057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.797090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.797189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.797217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.797307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.797335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.797473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.797501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.797619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.797646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.797739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.797766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.797862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.797895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.797987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.798015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.798136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.798176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 
00:35:50.673 [2024-11-20 00:00:24.798269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.798297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.798392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.798418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.798527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.798553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.798643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.798670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.798785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.798811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.798896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.798921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.799018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.799045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.799150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.799180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.799284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.799312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.799448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.799474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 
00:35:50.673 [2024-11-20 00:00:24.799571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.799598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.799696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.799728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.799819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.799847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.799975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.800003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.800094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.800122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.800207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.800233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.800342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.800369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.800472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.800499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.800586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.800623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.800726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.800755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 
00:35:50.673 [2024-11-20 00:00:24.800849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.800876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.800969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.800996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.801088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.801116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.801204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.801231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.801323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.801355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.801446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.801474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.801573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.801599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.801688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.801713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.801803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.801829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.801920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.801947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 
00:35:50.673 [2024-11-20 00:00:24.802037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.802063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.802175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.802200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.802291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.802319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.802416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.802442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.802528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.802554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.673 [2024-11-20 00:00:24.802658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.673 [2024-11-20 00:00:24.802699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.673 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.802803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.802831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.802928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.802956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.803067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.803102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.803204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.803230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 
00:35:50.674 [2024-11-20 00:00:24.803319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.803348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.803442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.803470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.803561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.803587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.803679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.803709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.803799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.803825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.803946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.803972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.804082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.804112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.804237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.804264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.804362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.804390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.804487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.804515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 
00:35:50.674 [2024-11-20 00:00:24.804606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.804633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.804730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.804762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.804845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.804872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.804996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.805022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.805122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.805149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.805228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.805254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.805337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.805364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.805485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.805512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.805597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.805625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.805725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.805754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 
00:35:50.674 [2024-11-20 00:00:24.805856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.805883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.805969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.805996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.806089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.806117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.806215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.806242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.806359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.806387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.806523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.806552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.806648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.806675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.806769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.806797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.806892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.806918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.807005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.807031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 
00:35:50.674 [2024-11-20 00:00:24.807140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.807167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.807255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.807281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.807374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.807400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.807493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.807519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.807612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.807638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.807729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.807755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.807846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.807874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.807987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.808016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.808172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.808208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.808308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.808335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 
00:35:50.674 [2024-11-20 00:00:24.808430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.808457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.808574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.808600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.808690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.808716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.808811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.808836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.808922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.808949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.809040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.809075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.809174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.809201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.809303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.809331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.809447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.809474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.809577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.809604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 
00:35:50.674 [2024-11-20 00:00:24.809693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.809721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.809815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.809843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.809944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.809970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.810076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.810103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.810197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.810223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.810317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.810343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.810431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.810457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.810543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.810569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.674 [2024-11-20 00:00:24.810668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.674 [2024-11-20 00:00:24.810693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.674 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.810782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.810811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 
00:35:50.675 [2024-11-20 00:00:24.810904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.810931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.811026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.811053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.811155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.811183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.811285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.811326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.811442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.811479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.811567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.811594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.811710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.811742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.811840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.811866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.811953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.811979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.812059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.812114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 
00:35:50.675 [2024-11-20 00:00:24.812215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.812241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.812330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.812356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.812461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.812490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.812585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.812612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.812713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.812740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.812861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.812888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.812978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.813005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.813141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.813171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.813281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.813313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.813414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.813441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 
00:35:50.675 [2024-11-20 00:00:24.813563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.813589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.813676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.813701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.813826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.813851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.813933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.813959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.814049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.814091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.814180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.814207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.814300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.814326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.814459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.814485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.814575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.814602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.814716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.814745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 
00:35:50.675 [2024-11-20 00:00:24.814843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.814882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.815008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.815037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.815155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.815182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.815272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.815298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.815392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.815418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.815535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.815561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.815652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.815678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.815765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.815791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.815893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.815918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.816012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.816039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 
00:35:50.675 [2024-11-20 00:00:24.816168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.816196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.816292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.816318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.816404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.816431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.816521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.816548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.816666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.816692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.816821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.816853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.816949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.816975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.817079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.817110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.817205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.817232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.817325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.817356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 
00:35:50.675 [2024-11-20 00:00:24.817442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.817469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.817561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.817588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.817681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.817707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.817796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.817822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.817914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.817941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.818061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.818097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.818217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.818244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.818335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.675 [2024-11-20 00:00:24.818362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.675 qpair failed and we were unable to recover it. 00:35:50.675 [2024-11-20 00:00:24.818452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.818478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.818583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.818612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 
00:35:50.676 [2024-11-20 00:00:24.818726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.818753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.818856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.818883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.819000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.819026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.819121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.819149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.819236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.819262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.819352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.819379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.819468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.819495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.819618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.819644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.819768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.819796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.819928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.819968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 
00:35:50.676 [2024-11-20 00:00:24.820081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.820110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.820194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.820220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.820307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.820338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.820461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.820487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.820579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.820606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.820690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.820716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.820804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.820830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.820922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.820950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.821045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.821078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.821177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.821205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 
00:35:50.676 [2024-11-20 00:00:24.821354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.821381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.821465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.821491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.821582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.821608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.821709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.821737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.821859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.821885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.822001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.822028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.822134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.822163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.822258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.822284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.822404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.822430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.822523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.822548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 
00:35:50.676 [2024-11-20 00:00:24.822641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.822669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.822759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.822786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.822904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.822932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.823053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.823086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.823174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.823201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.823291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.823318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.823413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.823440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.823527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.823554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.823644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.823670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.823774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.823801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 
00:35:50.676 [2024-11-20 00:00:24.823890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.823916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.824004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.824031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.824131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.824158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.824259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.824286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.824407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.824435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.824523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.824550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.824648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.824675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.824774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.824801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.824896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.824924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.825048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.825084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 
00:35:50.676 [2024-11-20 00:00:24.825173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.825199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.825291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.825318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.825434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.825465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.825581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.825607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.676 [2024-11-20 00:00:24.825695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.676 [2024-11-20 00:00:24.825722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.676 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.825810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.825836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.825931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.825958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.826040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.826065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.826161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.826187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.826300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.826326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 
00:35:50.677 [2024-11-20 00:00:24.826413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.826439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.826527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.826554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.826634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.826659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.826739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.826764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.826850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.826876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.826986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.827011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.827124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.827153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.827251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.827278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.827389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.827418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.827541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.827567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 
00:35:50.677 [2024-11-20 00:00:24.827662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.827689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.827775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.827802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.827949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.827976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.828080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.828106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.828198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.828224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.828305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.828331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.828452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.828478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.828572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.828600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.828699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.828727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.828857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.828890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 
00:35:50.677 [2024-11-20 00:00:24.828984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.829012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.829144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.829172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.829262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.829289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.829376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.829402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.829487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.829514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.829663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.829689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.829779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.829805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.829902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.829928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.830020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.830046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.830143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.830170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 
00:35:50.677 [2024-11-20 00:00:24.830259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.830287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.830388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.830415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.830530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.830556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.830654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.830681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.830779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.830806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.830895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.830921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.831039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.831066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.831172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.831200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.831290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.831316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.831416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.831443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 
00:35:50.677 [2024-11-20 00:00:24.831532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.831559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.831645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.831673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.831777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.831816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.831912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.831940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.832033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.832060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.832161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.832188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.832272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.832304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.832437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.832465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.832556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.832583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 00:35:50.677 [2024-11-20 00:00:24.832676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.677 [2024-11-20 00:00:24.832703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.677 qpair failed and we were unable to recover it. 
00:35:50.677 [2024-11-20 00:00:24.832789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.678 [2024-11-20 00:00:24.832816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.678 qpair failed and we were unable to recover it. 00:35:50.678 [2024-11-20 00:00:24.832905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.678 [2024-11-20 00:00:24.832932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.678 qpair failed and we were unable to recover it. 00:35:50.678 [2024-11-20 00:00:24.833024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.678 [2024-11-20 00:00:24.833050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.678 qpair failed and we were unable to recover it. 00:35:50.678 [2024-11-20 00:00:24.833148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.678 [2024-11-20 00:00:24.833174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.678 qpair failed and we were unable to recover it. 00:35:50.678 [2024-11-20 00:00:24.833264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.678 [2024-11-20 00:00:24.833290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.678 qpair failed and we were unable to recover it. 00:35:50.678 [2024-11-20 00:00:24.833413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.678 [2024-11-20 00:00:24.833439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.678 qpair failed and we were unable to recover it. 00:35:50.678 [2024-11-20 00:00:24.833532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.678 [2024-11-20 00:00:24.833558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.678 qpair failed and we were unable to recover it. 00:35:50.678 [2024-11-20 00:00:24.833647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.678 [2024-11-20 00:00:24.833675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.678 qpair failed and we were unable to recover it. 00:35:50.678 [2024-11-20 00:00:24.833766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.678 [2024-11-20 00:00:24.833793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.678 qpair failed and we were unable to recover it. 00:35:50.678 [2024-11-20 00:00:24.833891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.678 [2024-11-20 00:00:24.833919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.678 qpair failed and we were unable to recover it. 
00:35:50.678 [2024-11-20 00:00:24.834011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.678 [2024-11-20 00:00:24.834038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420
00:35:50.678 qpair failed and we were unable to recover it.
00:35:50.678 [2024-11-20 00:00:24.835027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.678 [2024-11-20 00:00:24.835055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420
00:35:50.678 qpair failed and we were unable to recover it.
00:35:50.678 [2024-11-20 00:00:24.835524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.678 [2024-11-20 00:00:24.835552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420
00:35:50.678 qpair failed and we were unable to recover it.
00:35:50.678 [2024-11-20 00:00:24.835694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:50.678 [2024-11-20 00:00:24.835730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420
00:35:50.678 qpair failed and we were unable to recover it.
00:35:50.678 [... the same connect() failure (errno = 111) and "qpair failed and we were unable to recover it." messages repeat continuously from 00:00:24.834 through 00:00:24.856 for tqpairs 0x129cb40, 0x7f6064000b90, 0x7f6068000b90 and 0x7f6070000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:35:50.952 [2024-11-20 00:00:24.856544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.856571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.856676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.856702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.856821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.856846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.856929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.856954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.857038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.857064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.857196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.857223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.857310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.857338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:50.952 [2024-11-20 00:00:24.857429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.857460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:50.952 [2024-11-20 00:00:24.857552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.857578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 
00:35:50.952 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:50.952 [2024-11-20 00:00:24.857691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.857718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:50.952 [2024-11-20 00:00:24.857804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.857830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.952 [2024-11-20 00:00:24.857953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.857981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.858067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.858100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.858216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.858242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.858346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.858372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.858454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.858480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.858566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.858591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.858704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.858731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 
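The bash xtrace fragments woven through these records ("(( i == 0 ))", "return 0", "timing_exit start_nvmf_tgt", "set +x") come from the test script itself: the start_nvmf_tgt phase has completed and tracing is being switched off, while the host side keeps retrying the listener on 10.0.0.2:4420 until it is reachable again. The polling helper below is a hypothetical Python illustration of that retry pattern; it is not taken from the SPDK scripts, and the function name and timeout are purely illustrative:

import socket, time

def wait_for_listener(addr: str, port: int, timeout_s: float = 30.0) -> bool:
    # Poll until a TCP connect() succeeds, i.e. the NVMe/TCP listener is back.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((addr, port), timeout=1.0):
                return True
        except OSError:             # e.g. ECONNREFUSED (111) while the target is down
            time.sleep(0.5)
    return False

# Illustrative usage with the address from the log: wait_for_listener("10.0.0.2", 4420)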
00:35:50.952 [2024-11-20 00:00:24.858875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.858903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.859026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.859052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.859152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.859179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.859264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.859293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.859413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.859440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.859531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.859559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.859645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.859672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.859757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.859783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.859901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.952 [2024-11-20 00:00:24.859939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.952 qpair failed and we were unable to recover it. 00:35:50.952 [2024-11-20 00:00:24.860032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.860059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 
00:35:50.953 [2024-11-20 00:00:24.860162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.860188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.860277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.860304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.860434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.860460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.860554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.860579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.860707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.860739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.860860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.860888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.861017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.861054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.861170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.861196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.861310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.861337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.861437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.861464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 
00:35:50.953 [2024-11-20 00:00:24.861560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.861587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.861697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.861724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.861819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.861845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.861928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.861955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.862043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.862077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.862169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.862195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.862285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.862311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.862447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.862473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.862574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.862600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.862696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.862722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 
00:35:50.953 [2024-11-20 00:00:24.862806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.862834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.862984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.863010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.863113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.863140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.863261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.863288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.863383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.863409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.863521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.863547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.863657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.863684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.863766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.863793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.863879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.863907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.864003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.864030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 
00:35:50.953 [2024-11-20 00:00:24.864140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.864167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.864263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.864294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.864442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.864468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.864555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.864581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.864668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.864694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.864839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.864877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.953 [2024-11-20 00:00:24.864969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.953 [2024-11-20 00:00:24.864995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.953 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.865094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.865122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.865206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.865235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.865321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.865348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 
00:35:50.954 [2024-11-20 00:00:24.865467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.865493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.865573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.865600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.865689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.865716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.865836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.865862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.865961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.865990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.866098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.866125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.866217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.866243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.866328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.866354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.866450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.866476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.866567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.866593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 
00:35:50.954 [2024-11-20 00:00:24.866715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.866740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.866825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.866851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.866949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.866975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.867083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.867110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.867196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.867222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.867314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.867340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.867463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.867489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.867578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.867604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.867691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.867721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.867816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.867844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 
00:35:50.954 [2024-11-20 00:00:24.867931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.867957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.868050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.868094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.868181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.868207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.868324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.868349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.868451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.868477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.868571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.868599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.868690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.868717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.868835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.868862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.868979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.869006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.954 [2024-11-20 00:00:24.869112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.869139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 
00:35:50.954 [2024-11-20 00:00:24.869230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.954 [2024-11-20 00:00:24.869257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.954 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.869350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.869376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.869480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.869515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.869601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.869628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.869728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.869754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.869835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.869861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.869952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.869978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.870076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.870105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.870221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.870249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.870342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.870374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 
00:35:50.955 [2024-11-20 00:00:24.870472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.870499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.870583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.870610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.870727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.870754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.870900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.870927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.871011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.871038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.871134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.871167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.871291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.871318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.871441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.871469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.871569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.871597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.871684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.871712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 
00:35:50.955 [2024-11-20 00:00:24.871822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.871847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.871951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.871977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.872099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.872125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.872212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.872237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.872332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.872358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.872451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.872479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.872611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.872637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.872754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.872783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.872899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.872926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.873018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.873046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 
00:35:50.955 [2024-11-20 00:00:24.873169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.873196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.873286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.873313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.873413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.873440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.873526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.873553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.873641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.873668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.873780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.873806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.873900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.873927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.874014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.874039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.955 qpair failed and we were unable to recover it. 00:35:50.955 [2024-11-20 00:00:24.874172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.955 [2024-11-20 00:00:24.874198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.874286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.874312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 
00:35:50.956 [2024-11-20 00:00:24.874412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.874440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.874556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.874583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.874685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.874719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.874815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.874841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.874959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.874985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.875082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.875121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.875209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.875236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.875321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.875347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.875437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.875463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.875559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.875589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 
00:35:50.956 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:50.956 [2024-11-20 00:00:24.875688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.875715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:50.956 [2024-11-20 00:00:24.875800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.875829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.956 [2024-11-20 00:00:24.875918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.875945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.956 [2024-11-20 00:00:24.876040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.876084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.876187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.876215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.876361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.876388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.876480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.876506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.876626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.876654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 
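Between the connection errors, the script installs its cleanup trap and issues "rpc_cmd bdev_malloc_create 64 512 -b Malloc0", i.e. it asks the target for a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0, to be exported over NVMe/TCP later in the test. rpc_cmd wraps SPDK's JSON-RPC interface; the Python sketch below shows a roughly equivalent request. The socket path /var/tmp/spdk.sock is SPDK's default RPC socket (the test may override it), and the num_blocks conversion is an assumption about how the 64 MiB size is expressed, so treat the parameter set as illustrative rather than a transcript of what the script sends:

import json, socket

# Illustrative JSON-RPC call approximating "rpc_cmd bdev_malloc_create 64 512 -b Malloc0":
# a 64 MiB malloc bdev with a 512-byte block size, named Malloc0.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_malloc_create",
    "params": {
        "name": "Malloc0",
        "block_size": 512,
        "num_blocks": 64 * 1024 * 1024 // 512,   # assumption: 64 means MiB, converted to blocks
    },
}

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/tmp/spdk.sock")               # SPDK's default RPC socket; may differ in this test
sock.sendall(json.dumps(request).encode())
print(sock.recv(65536).decode())                 # on success, the result is the new bdev's name
sock.close()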
00:35:50.956 [2024-11-20 00:00:24.876742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.876769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.876862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.876888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.876970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.876996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.877097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.877146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.877237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.877263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.877343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.877368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.877484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.877510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.877599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.877625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.877741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.877766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 00:35:50.956 [2024-11-20 00:00:24.877860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.956 [2024-11-20 00:00:24.877888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.956 qpair failed and we were unable to recover it. 
00:35:50.956 [2024-11-20 00:00:24.877979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.878005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.878114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.878143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.878232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.878260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.878377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.878403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.878494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.878521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.878606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.878633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.878719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.878745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.878860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.878885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.878980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.879006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.879111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.879138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 
00:35:50.957 [2024-11-20 00:00:24.879226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.879251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.879334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.879360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.879486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.879512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.879602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.879627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.879739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.879765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.879883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.879909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.880011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.880036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.880162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.880188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.880278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.880304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.880451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.880476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 
00:35:50.957 [2024-11-20 00:00:24.880565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.880591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.880677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.880703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.880839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.880878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.881021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.881063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.881176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.881204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.881300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.881328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.881442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.881472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.881595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.881622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.881725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.881752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.881839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.881865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 
00:35:50.957 [2024-11-20 00:00:24.881951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.881977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.882067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.882098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.882185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.882211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.882303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.882329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.882418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.882444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.957 qpair failed and we were unable to recover it. 00:35:50.957 [2024-11-20 00:00:24.882561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.957 [2024-11-20 00:00:24.882586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.882683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.882709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.882791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.882817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.882939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.882968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.883082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.883111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 
00:35:50.958 [2024-11-20 00:00:24.883208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.883235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.883322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.883349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.883476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.883503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.883618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.883646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.883771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.883797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.883891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.883916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.884010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.884049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.884184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.884212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.884305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.884332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.884448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.884475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 
00:35:50.958 [2024-11-20 00:00:24.884571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.884597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.884687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.884713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.884799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.884826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.884955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.884984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.885109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.885136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.885247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.885286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.885402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.885431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.885583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.885609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.885699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.885725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.885854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.885882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 
00:35:50.958 [2024-11-20 00:00:24.885972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.885998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.886100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.886127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.886216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.886244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.886336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.886362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.886453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.886479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.886591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.958 [2024-11-20 00:00:24.886617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.958 qpair failed and we were unable to recover it. 00:35:50.958 [2024-11-20 00:00:24.886703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.886733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.886826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.886855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.886991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.887019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.887130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.887157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 
00:35:50.959 [2024-11-20 00:00:24.887248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.887274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.887399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.887425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.887514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.887540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.887628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.887654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.887768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.887795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.887889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.887917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.888002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.888029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.888138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.888165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.888254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.888281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.888381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.888408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 
00:35:50.959 [2024-11-20 00:00:24.888548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.888574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.888707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.888732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.888823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.888848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.888965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.888991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.889088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.889115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.889202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.889227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.889324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.889351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.889477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.889504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.889590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.889617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.889731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.889758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 
00:35:50.959 [2024-11-20 00:00:24.889859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.889887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.889990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.890019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.890115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.890142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.890230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.890262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.890405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.890431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.890577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.890603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.890702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.890728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.890816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.890845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.890968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.890994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.891114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.891141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 
00:35:50.959 [2024-11-20 00:00:24.891265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.891292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.891389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.891415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.891566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.891592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.959 [2024-11-20 00:00:24.891682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.959 [2024-11-20 00:00:24.891708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.959 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.891800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.891828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.891915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.891942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.892057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.892089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.892186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.892213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.892302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.892328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.892454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.892480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 
00:35:50.960 [2024-11-20 00:00:24.892623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.892650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.892740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.892766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.892865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.892904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.893003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.893031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.893136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.893163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.893256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.893282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.893395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.893434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.893543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.893571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.893660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.893688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.893786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.893814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 
00:35:50.960 [2024-11-20 00:00:24.893926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.893966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.894095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.894123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.894220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.894247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.894338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.894364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.894484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.894510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.894656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.894683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.894768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.894795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.894888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.894919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.895023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.895082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.895220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.895248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 
00:35:50.960 [2024-11-20 00:00:24.895360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.895386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.895475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.895501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.895620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.895646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.895738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.895770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.895890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.895919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.896024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.896052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.896147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.896174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.896265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.896292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.896418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.896444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.896539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.896567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 
00:35:50.960 [2024-11-20 00:00:24.896657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.896685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.960 [2024-11-20 00:00:24.896834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.960 [2024-11-20 00:00:24.896860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.960 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.896950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.896977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.897067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.897101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.897196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.897223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.897339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.897369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.897472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.897500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.897634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.897662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.897750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.897777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.897875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.897903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 
00:35:50.961 [2024-11-20 00:00:24.898026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.898053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.898154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.898182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.898276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.898304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.898460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.898486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.898573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.898599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.898687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.898713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.898809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.898835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.898946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.898972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.899052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.899084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.899206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.899232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 
00:35:50.961 [2024-11-20 00:00:24.899318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.899349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.899469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.899498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.899587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.899613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.899707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.899736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.899827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.899855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.899972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.900000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.900103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.900130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.900259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.900286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.900386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.900413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.900505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.900533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 
00:35:50.961 [2024-11-20 00:00:24.900649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.900677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.900778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.900803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.900889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.900915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.901026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.961 [2024-11-20 00:00:24.901052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.961 qpair failed and we were unable to recover it. 00:35:50.961 [2024-11-20 00:00:24.901157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.901183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.901274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.901303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.901431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.901459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.901551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.901577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.901721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.901747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.901830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.901856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 
00:35:50.962 [2024-11-20 00:00:24.901948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.901975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.902066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.902110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.902235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.902263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.902367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.902393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.902508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.902535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.902624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.902651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.902744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.902772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.902865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.902893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.902978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.903005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.903120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.903148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 
00:35:50.962 [2024-11-20 00:00:24.903268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.903294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.903394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.903421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.903511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.903538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.903629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.903657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.903744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.903771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.903888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.903915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.903999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.904026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.904150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.904177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.904274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.904301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.904404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.904430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 
00:35:50.962 [2024-11-20 00:00:24.904511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.904542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.904658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.904684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.904798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.904823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.904921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.904960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.905064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.905099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.905195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.905221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.905341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.905367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.905518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.905544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.905627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.905652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.905750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.905777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 
00:35:50.962 [2024-11-20 00:00:24.905885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.905911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.905998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.962 [2024-11-20 00:00:24.906024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.962 qpair failed and we were unable to recover it. 00:35:50.962 [2024-11-20 00:00:24.906131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.906158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.906253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.906279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.906410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.906438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.906524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.906551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.906647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.906674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.906784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.906811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.906931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.906959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.907059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.907095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 
00:35:50.963 [2024-11-20 00:00:24.907178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.907205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.907319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.907345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.907470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.907496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.907617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.907643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.907732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.907760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.907882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.907910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.908002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.908028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.908125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.908156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.908273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.908299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.908393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.908419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 
00:35:50.963 [2024-11-20 00:00:24.908511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.908538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.908634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.908660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.908745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.908771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.908861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.908887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.908989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.909014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.909117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.909143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.909238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.909264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.909347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.909373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.909467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.909493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.909579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.909607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 
00:35:50.963 [2024-11-20 00:00:24.909692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.909719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.909850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.909890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.910015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.910042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.910137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.910164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.910249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.910275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.910363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.910388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.910492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.910518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.910619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.910647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.910757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.910784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 00:35:50.963 [2024-11-20 00:00:24.910904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.963 [2024-11-20 00:00:24.910930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.963 qpair failed and we were unable to recover it. 
00:35:50.963 [2024-11-20 00:00:24.911015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.911041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.911137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.911165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.911259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.911286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.911413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.911440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.911553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.911583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.911706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.911733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.911835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.911862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.911970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.912010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.912125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.912155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.912278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.912305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 
00:35:50.964 [2024-11-20 00:00:24.912428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.912455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.912554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.912581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.912699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.912725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.912811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.912838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.912962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.912988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.913107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.913134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.913225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.913252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.913347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.913373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.913465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.913491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.913611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.913639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 
00:35:50.964 [2024-11-20 00:00:24.913730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.913756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.913835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.913862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.913962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.913990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.914096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.914124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 Malloc0 00:35:50.964 [2024-11-20 00:00:24.914222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.914250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.914372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.914400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.914498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.914526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.964 [2024-11-20 00:00:24.914660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.914687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:50.964 [2024-11-20 00:00:24.914777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.914804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 
00:35:50.964 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.964 [2024-11-20 00:00:24.914905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.914931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.964 [2024-11-20 00:00:24.915059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.915094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.915233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.915259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.915357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.915389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.915503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.915529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.915618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.915644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.915745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.964 [2024-11-20 00:00:24.915771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.964 qpair failed and we were unable to recover it. 00:35:50.964 [2024-11-20 00:00:24.915898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.915924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.916043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.916088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 
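The rpc_cmd lines interleaved above come from host/target_disconnect.sh re-creating the target side over JSON-RPC. Only nvmf_create_transport -t tcp -o and nvmf_create_subsystem appear verbatim in this log; the bdev, namespace and listener steps in the sketch below are assumptions inferred from the "Malloc0" bdev name and the 10.0.0.2:4420 address the host keeps dialing, so treat it as the usual SPDK bring-up shape rather than the exact script:

# Sketch of a typical SPDK NVMe-oF/TCP target bring-up (listener/namespace steps assumed)
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420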
00:35:50.965 [2024-11-20 00:00:24.916209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.916236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.916332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.916375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.916472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.916500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.916626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.916654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.916744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.916770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.916865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.916892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.917035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.917078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.917177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.917203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.917282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.917308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.917406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.917432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 
00:35:50.965 [2024-11-20 00:00:24.917515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.917541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.917663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.917691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.917783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.917810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.917933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.917960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.965 [2024-11-20 00:00:24.917951] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.918045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.918083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.918175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.918203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.918325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.918363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.918483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.918509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.918615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.918641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.918737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.918763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 
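The interleaved "*** TCP Transport Init ***" notice above is the target re-initializing its TCP transport while the host is still retrying; once the subsystem listener is registered again, the refused connects stop. A manual way to confirm the listener from the target host (not part of the test, shown only as a sketch) would be:

ss -ltn | grep -w 4420               # kernel view: is anything listening on the NVMe-oF port?
scripts/rpc.py nvmf_get_subsystems   # SPDK view: subsystem NQNs and their listen addresses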
00:35:50.965 [2024-11-20 00:00:24.918847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.918873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.918968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.918994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.919095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.919122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.919222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.919248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.919371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.919397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.919485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.919513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.919597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.919626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.919724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.919751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.919866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.919893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.919981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.920007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 
00:35:50.965 [2024-11-20 00:00:24.920114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.920142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.920273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.920305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.920394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.920420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.965 [2024-11-20 00:00:24.920532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.965 [2024-11-20 00:00:24.920558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.965 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.920647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.920673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.920783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.920823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.920956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.920984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.921107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.921134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.921226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.921253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.921349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.921378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 
00:35:50.966 [2024-11-20 00:00:24.921499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.921526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.921627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.921653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.921742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.921768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.921871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.921911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.922011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.922039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.922162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.922190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.922284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.922311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.922434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.922461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.922558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.922585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.922678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.922705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 
00:35:50.966 [2024-11-20 00:00:24.922812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.922851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.922981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.923010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.923107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.923136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.923255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.923282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.923401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.923429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.923526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.923554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.923640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.923666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.923760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.923786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.923904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.923931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.924049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.924082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 
00:35:50.966 [2024-11-20 00:00:24.924201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.924228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.924318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.924344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.924469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.924495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.924602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.924629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.924751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.924778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.924867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.924893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.966 [2024-11-20 00:00:24.924986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.966 [2024-11-20 00:00:24.925012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.966 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.925141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.925169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.925266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.925291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.925391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.925417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 
00:35:50.967 [2024-11-20 00:00:24.925506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.925532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.925622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.925648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.925803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.925829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.925924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.925950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.926031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.926057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.967 [2024-11-20 00:00:24.926171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.926198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:50.967 [2024-11-20 00:00:24.926318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.926347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.926433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.967 [2024-11-20 00:00:24.926459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 
00:35:50.967 [2024-11-20 00:00:24.926551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.967 [2024-11-20 00:00:24.926577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.926662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.926688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.926780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.926806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.926907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.926933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.927023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.927048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.927154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.927199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.927307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.927336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.927434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.927463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.927583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.927610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 
00:35:50.967 [2024-11-20 00:00:24.927698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.927726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.927809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.927836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.927917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.927945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.928041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.928067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.928162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.928188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.928278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.928304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.928396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.928422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.928509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.928535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.928616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.928642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.928756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.928782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 
00:35:50.967 [2024-11-20 00:00:24.928870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.928896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.929014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.929042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.929148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.929176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.929287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.929326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.929425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.929454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.929548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.967 [2024-11-20 00:00:24.929575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.967 qpair failed and we were unable to recover it. 00:35:50.967 [2024-11-20 00:00:24.929661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.929687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.929777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.929805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.929902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.929927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.930015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.930042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 
00:35:50.968 [2024-11-20 00:00:24.930140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.930167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.930255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.930281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.930377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.930404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.930493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.930522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.930621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.930648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.930764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.930791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.930881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.930908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.931026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.931053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.931165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.931192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.931289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.931318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 
00:35:50.968 [2024-11-20 00:00:24.931465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.931492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.931611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.931639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.931727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.931754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.931871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.931898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.931989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.932017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.932108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.932135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.932259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.932289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.932406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.932432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.932547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.932573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.932668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.932694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 
00:35:50.968 [2024-11-20 00:00:24.932839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.932867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.932962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.932991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.933113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.933140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.933260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.933287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.933370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.933396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.933492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.933519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.933613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.933641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.933733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.933762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.933887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.933914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.934035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.934062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 
00:35:50.968 [2024-11-20 00:00:24.934162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.934189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 [2024-11-20 00:00:24.934279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.934305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:50.968 [2024-11-20 00:00:24.934393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.968 [2024-11-20 00:00:24.934420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.968 qpair failed and we were unable to recover it. 00:35:50.968 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.969 [2024-11-20 00:00:24.934515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.934541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.969 [2024-11-20 00:00:24.934632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.934658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.934742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.934769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.934873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.934912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.935043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.935080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 
00:35:50.969 [2024-11-20 00:00:24.935176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.935202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.935294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.935320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.935417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.935443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.935534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.935567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.935692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.935720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.935812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.935839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.935999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.936031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.936126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.936154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.936247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.936274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.936390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.936417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 
00:35:50.969 [2024-11-20 00:00:24.936538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.936566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.936696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.936723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.936810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.936838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.936959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.936987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.937102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.937130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.937216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.937243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.937331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.937357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.937449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.937475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.937595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.937622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.937705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.937732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 
00:35:50.969 [2024-11-20 00:00:24.937854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.937880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.938000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.938027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.938147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.938174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.938269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.938295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.938386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.938413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.938534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.938560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.938656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.938682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.938771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.938797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.938885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.938910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.939028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.939054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 
00:35:50.969 [2024-11-20 00:00:24.939153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.939184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.939268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.939293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.969 qpair failed and we were unable to recover it. 00:35:50.969 [2024-11-20 00:00:24.939387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.969 [2024-11-20 00:00:24.939412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.939527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.939553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.939639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.939666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.939760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.939786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.939867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.939892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.940051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.940098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.940206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.940234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.940355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.940382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 
00:35:50.970 [2024-11-20 00:00:24.940499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.940525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.940621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.940647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.940739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.940768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.940863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.940889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.940984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.941010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.941096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.941123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.941215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.941241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.941328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.941354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.941441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.941467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.941581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.941607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 
00:35:50.970 [2024-11-20 00:00:24.941696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.941722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.941812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.941840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.941991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.942029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6068000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.942153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.942188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.970 [2024-11-20 00:00:24.942284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.942311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.942406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:50.970 [2024-11-20 00:00:24.942433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.942562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.970 [2024-11-20 00:00:24.942590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.942681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.942708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.970 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.970 qpair failed and we were unable to recover it. 
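The rpc_cmd calls interleaved with the connect storm above (host/target_disconnect.sh lines 22, 24 and 25) are building the target-side subsystem while the initiator keeps retrying. A minimal standalone sketch of the same sequence, assuming SPDK's stock scripts/rpc.py front end on the default RPC socket and a target that already has a TCP transport (the nvmf_create_transport step is not visible in this part of the log); subsystem NQN, bdev name and flags are copied from the trace:

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the Malloc0 bdev as a namespace
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener call lands, the connection-refused storm gives way to the Fabrics CONNECT failures recorded below.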
00:35:50.970 [2024-11-20 00:00:24.942801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.942830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.942944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.942970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.943066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.943098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.943187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.943213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.943304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.970 [2024-11-20 00:00:24.943330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.970 qpair failed and we were unable to recover it. 00:35:50.970 [2024-11-20 00:00:24.943445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.943471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.943556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.943582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.943681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.943707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.943792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.943820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.943906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.943932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 
00:35:50.971 [2024-11-20 00:00:24.944036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.944062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.944162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.944188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.944275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.944301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.944390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.944415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6070000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.944514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.944542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.944663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.944689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.944780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.944806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.944890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.944916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.945041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.945075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.945167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.945193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 
00:35:50.971 [2024-11-20 00:00:24.945281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.945307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.945392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.945420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.945508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.945535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.945653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.945680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6064000b90 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.945784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.945823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.945948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.971 [2024-11-20 00:00:24.945975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x129cb40 with addr=10.0.0.2, port=4420 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.946424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.971 [2024-11-20 00:00:24.948819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.971 [2024-11-20 00:00:24.948930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.971 [2024-11-20 00:00:24.948958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.971 [2024-11-20 00:00:24.948974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.971 [2024-11-20 00:00:24.948989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.971 [2024-11-20 00:00:24.949038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.971 qpair failed and we were unable to recover it. 
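After the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above, the failure mode changes: the TCP socket now connects, but the Fabrics CONNECT for I/O qpair 1 is rejected because the target no longer recognises controller ID 0x1, and the host logs sct 1, sc 130 (0x82, the command-specific status the Fabrics Connect command uses for invalid parameters). When triaging a long run like this it can help to count the recovery failures and group them by tqpair address; a minimal sketch, assuming the console output has been saved locally (console.log is a placeholder name, not a file produced by this job):

  grep -c 'qpair failed and we were unable to recover it' console.log
  grep -o 'Failed to connect tqpair=0x[0-9a-f]*' console.log | sort | uniq -c | sort -rn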
00:35:50.971 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.971 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:50.971 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.971 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.971 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.971 00:00:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 342425 00:35:50.971 [2024-11-20 00:00:24.958608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.971 [2024-11-20 00:00:24.958706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.971 [2024-11-20 00:00:24.958733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.971 [2024-11-20 00:00:24.958748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.971 [2024-11-20 00:00:24.958761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.971 [2024-11-20 00:00:24.958792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.968630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.971 [2024-11-20 00:00:24.968731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.971 [2024-11-20 00:00:24.968760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.971 [2024-11-20 00:00:24.968775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.971 [2024-11-20 00:00:24.968789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.971 [2024-11-20 00:00:24.968826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.971 qpair failed and we were unable to recover it. 
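The xtrace lines above show the harness re-adding the discovery listener through SPDK's JSON-RPC interface (rpc_cmd in the autotest scripts ultimately drives scripts/rpc.py). Outside the harness the equivalent call can be issued directly; a minimal sketch, assuming a stock SPDK checkout with the nvmf target app already running on the default RPC socket (/var/tmp/spdk.sock), mirroring the traced command rather than reproducing the test's wrappers:

  # add a TCP discovery listener on 10.0.0.2:4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420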
00:35:50.971 [2024-11-20 00:00:24.978637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.971 [2024-11-20 00:00:24.978742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.971 [2024-11-20 00:00:24.978768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.971 [2024-11-20 00:00:24.978783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.971 [2024-11-20 00:00:24.978797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.971 [2024-11-20 00:00:24.978827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.971 [2024-11-20 00:00:24.988680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.971 [2024-11-20 00:00:24.988781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.971 [2024-11-20 00:00:24.988807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.971 [2024-11-20 00:00:24.988822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.971 [2024-11-20 00:00:24.988836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.971 [2024-11-20 00:00:24.988867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.971 qpair failed and we were unable to recover it. 00:35:50.972 [2024-11-20 00:00:24.998735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:24.998869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:24.998895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:24.998909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:24.998923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:24.998954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 
00:35:50.972 [2024-11-20 00:00:25.008614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:25.008706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:25.008732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:25.008748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:25.008761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:25.008792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 00:35:50.972 [2024-11-20 00:00:25.018713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:25.018813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:25.018840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:25.018854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:25.018868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:25.018899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 00:35:50.972 [2024-11-20 00:00:25.028759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:25.028868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:25.028894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:25.028909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:25.028923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:25.028954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 
00:35:50.972 [2024-11-20 00:00:25.038815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:25.038902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:25.038928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:25.038943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:25.038956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:25.038987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 00:35:50.972 [2024-11-20 00:00:25.048768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:25.048859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:25.048885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:25.048900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:25.048913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:25.048944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 00:35:50.972 [2024-11-20 00:00:25.058774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:25.058867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:25.058898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:25.058914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:25.058927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:25.058957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 
00:35:50.972 [2024-11-20 00:00:25.068811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:25.068909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:25.068938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:25.068954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:25.068967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:25.068998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 00:35:50.972 [2024-11-20 00:00:25.078799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:25.078892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:25.078918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:25.078932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:25.078946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:25.078978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 00:35:50.972 [2024-11-20 00:00:25.088848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:25.088938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:25.088964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:25.088979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:25.088993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:25.089037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 
00:35:50.972 [2024-11-20 00:00:25.098908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:25.099006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:25.099032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:25.099047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:25.099076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:25.099110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 00:35:50.972 [2024-11-20 00:00:25.108899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:25.108989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:25.109015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:25.109030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:25.109043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:25.109082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 00:35:50.972 [2024-11-20 00:00:25.118918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.972 [2024-11-20 00:00:25.119010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.972 [2024-11-20 00:00:25.119035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.972 [2024-11-20 00:00:25.119050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.972 [2024-11-20 00:00:25.119064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.972 [2024-11-20 00:00:25.119103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.972 qpair failed and we were unable to recover it. 
00:35:50.973 [2024-11-20 00:00:25.128945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.129037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.129063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.129086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.129101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.129133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 00:35:50.973 [2024-11-20 00:00:25.138987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.139088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.139114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.139129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.139142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.139174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 00:35:50.973 [2024-11-20 00:00:25.149006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.149153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.149180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.149194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.149208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.149238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 
00:35:50.973 [2024-11-20 00:00:25.159042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.159144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.159171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.159185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.159198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.159230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 00:35:50.973 [2024-11-20 00:00:25.169137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.169273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.169299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.169315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.169329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.169359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 00:35:50.973 [2024-11-20 00:00:25.179112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.179211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.179236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.179251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.179264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.179296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 
00:35:50.973 [2024-11-20 00:00:25.189137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.189236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.189268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.189283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.189296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.189328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 00:35:50.973 [2024-11-20 00:00:25.199211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.199304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.199330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.199344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.199357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.199388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 00:35:50.973 [2024-11-20 00:00:25.209320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.209411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.209437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.209452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.209466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.209496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 
00:35:50.973 [2024-11-20 00:00:25.219295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.219407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.219433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.219447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.219461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.219492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 00:35:50.973 [2024-11-20 00:00:25.229274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.229361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.229386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.229400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.229419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.229469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 00:35:50.973 [2024-11-20 00:00:25.239424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.239519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.239544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.239559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.239572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.239602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 
00:35:50.973 [2024-11-20 00:00:25.249316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.973 [2024-11-20 00:00:25.249458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.973 [2024-11-20 00:00:25.249483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.973 [2024-11-20 00:00:25.249498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.973 [2024-11-20 00:00:25.249512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:50.973 [2024-11-20 00:00:25.249542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:50.973 qpair failed and we were unable to recover it. 00:35:51.234 [2024-11-20 00:00:25.259366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.234 [2024-11-20 00:00:25.259502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.234 [2024-11-20 00:00:25.259530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.234 [2024-11-20 00:00:25.259545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.234 [2024-11-20 00:00:25.259559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.234 [2024-11-20 00:00:25.259590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.234 qpair failed and we were unable to recover it. 00:35:51.234 [2024-11-20 00:00:25.269364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.234 [2024-11-20 00:00:25.269453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.234 [2024-11-20 00:00:25.269478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.234 [2024-11-20 00:00:25.269493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.234 [2024-11-20 00:00:25.269506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.234 [2024-11-20 00:00:25.269538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.234 qpair failed and we were unable to recover it. 
00:35:51.234 [2024-11-20 00:00:25.279422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.234 [2024-11-20 00:00:25.279511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.234 [2024-11-20 00:00:25.279537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.234 [2024-11-20 00:00:25.279551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.234 [2024-11-20 00:00:25.279565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.234 [2024-11-20 00:00:25.279595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.234 qpair failed and we were unable to recover it. 00:35:51.234 [2024-11-20 00:00:25.289432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.234 [2024-11-20 00:00:25.289535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.234 [2024-11-20 00:00:25.289561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.234 [2024-11-20 00:00:25.289576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.234 [2024-11-20 00:00:25.289589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.234 [2024-11-20 00:00:25.289620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.234 qpair failed and we were unable to recover it. 00:35:51.234 [2024-11-20 00:00:25.299460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.234 [2024-11-20 00:00:25.299583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.234 [2024-11-20 00:00:25.299609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.234 [2024-11-20 00:00:25.299623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.234 [2024-11-20 00:00:25.299636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.234 [2024-11-20 00:00:25.299667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.234 qpair failed and we were unable to recover it. 
00:35:51.234 [2024-11-20 00:00:25.309539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.234 [2024-11-20 00:00:25.309638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.234 [2024-11-20 00:00:25.309667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.234 [2024-11-20 00:00:25.309683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.234 [2024-11-20 00:00:25.309697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.309728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 00:35:51.235 [2024-11-20 00:00:25.319512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.319602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.319634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.319649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.319664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.319695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 00:35:51.235 [2024-11-20 00:00:25.329545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.329631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.329657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.329671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.329685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.329716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 
00:35:51.235 [2024-11-20 00:00:25.339578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.339671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.339696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.339711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.339725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.339755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 00:35:51.235 [2024-11-20 00:00:25.349633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.349743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.349772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.349787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.349801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.349832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 00:35:51.235 [2024-11-20 00:00:25.359632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.359726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.359751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.359773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.359787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.359819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 
00:35:51.235 [2024-11-20 00:00:25.369762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.369896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.369921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.369936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.369950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.369980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 00:35:51.235 [2024-11-20 00:00:25.379724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.379826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.379852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.379866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.379880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.379911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 00:35:51.235 [2024-11-20 00:00:25.389721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.389812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.389837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.389851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.389864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.389895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 
00:35:51.235 [2024-11-20 00:00:25.399757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.399877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.399903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.399918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.399931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.399970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 00:35:51.235 [2024-11-20 00:00:25.409764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.409865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.409892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.409907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.409921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.409954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 00:35:51.235 [2024-11-20 00:00:25.419794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.419893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.419922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.419939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.419952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.419983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 
00:35:51.235 [2024-11-20 00:00:25.429831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.429936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.429963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.429977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.429991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.430023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.235 qpair failed and we were unable to recover it. 00:35:51.235 [2024-11-20 00:00:25.439857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.235 [2024-11-20 00:00:25.439993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.235 [2024-11-20 00:00:25.440019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.235 [2024-11-20 00:00:25.440034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.235 [2024-11-20 00:00:25.440047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.235 [2024-11-20 00:00:25.440086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.236 qpair failed and we were unable to recover it. 00:35:51.236 [2024-11-20 00:00:25.449900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.236 [2024-11-20 00:00:25.449992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.236 [2024-11-20 00:00:25.450018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.236 [2024-11-20 00:00:25.450033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.236 [2024-11-20 00:00:25.450046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.236 [2024-11-20 00:00:25.450087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.236 qpair failed and we were unable to recover it. 
00:35:51.236 [2024-11-20 00:00:25.459946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.236 [2024-11-20 00:00:25.460086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.236 [2024-11-20 00:00:25.460112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.236 [2024-11-20 00:00:25.460127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.236 [2024-11-20 00:00:25.460141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.236 [2024-11-20 00:00:25.460172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.236 qpair failed and we were unable to recover it. 00:35:51.236 [2024-11-20 00:00:25.469943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.236 [2024-11-20 00:00:25.470084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.236 [2024-11-20 00:00:25.470120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.236 [2024-11-20 00:00:25.470135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.236 [2024-11-20 00:00:25.470148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.236 [2024-11-20 00:00:25.470180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.236 qpair failed and we were unable to recover it. 00:35:51.236 [2024-11-20 00:00:25.479952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.236 [2024-11-20 00:00:25.480042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.236 [2024-11-20 00:00:25.480084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.236 [2024-11-20 00:00:25.480100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.236 [2024-11-20 00:00:25.480114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.236 [2024-11-20 00:00:25.480144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.236 qpair failed and we were unable to recover it. 
00:35:51.236 [2024-11-20 00:00:25.489987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.236 [2024-11-20 00:00:25.490106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.236 [2024-11-20 00:00:25.490132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.236 [2024-11-20 00:00:25.490155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.236 [2024-11-20 00:00:25.490170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.236 [2024-11-20 00:00:25.490203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.236 qpair failed and we were unable to recover it. 00:35:51.236 [2024-11-20 00:00:25.500060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.236 [2024-11-20 00:00:25.500193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.236 [2024-11-20 00:00:25.500218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.236 [2024-11-20 00:00:25.500233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.236 [2024-11-20 00:00:25.500246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.236 [2024-11-20 00:00:25.500277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.236 qpair failed and we were unable to recover it. 00:35:51.236 [2024-11-20 00:00:25.510055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.236 [2024-11-20 00:00:25.510187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.236 [2024-11-20 00:00:25.510212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.236 [2024-11-20 00:00:25.510227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.236 [2024-11-20 00:00:25.510240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.236 [2024-11-20 00:00:25.510271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.236 qpair failed and we were unable to recover it. 
00:35:51.236 [2024-11-20 00:00:25.520126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.236 [2024-11-20 00:00:25.520266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.236 [2024-11-20 00:00:25.520292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.236 [2024-11-20 00:00:25.520307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.236 [2024-11-20 00:00:25.520320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.236 [2024-11-20 00:00:25.520365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.236 qpair failed and we were unable to recover it. 00:35:51.236 [2024-11-20 00:00:25.530169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.236 [2024-11-20 00:00:25.530300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.236 [2024-11-20 00:00:25.530326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.236 [2024-11-20 00:00:25.530341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.236 [2024-11-20 00:00:25.530355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.236 [2024-11-20 00:00:25.530395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.236 qpair failed and we were unable to recover it. 00:35:51.236 [2024-11-20 00:00:25.540159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.236 [2024-11-20 00:00:25.540270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.236 [2024-11-20 00:00:25.540295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.236 [2024-11-20 00:00:25.540310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.236 [2024-11-20 00:00:25.540323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.236 [2024-11-20 00:00:25.540355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.236 qpair failed and we were unable to recover it. 
00:35:51.498 [2024-11-20 00:00:25.550220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.498 [2024-11-20 00:00:25.550319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.498 [2024-11-20 00:00:25.550345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.498 [2024-11-20 00:00:25.550371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.498 [2024-11-20 00:00:25.550385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.498 [2024-11-20 00:00:25.550416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.498 qpair failed and we were unable to recover it. 00:35:51.498 [2024-11-20 00:00:25.560193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.498 [2024-11-20 00:00:25.560310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.498 [2024-11-20 00:00:25.560336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.498 [2024-11-20 00:00:25.560350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.498 [2024-11-20 00:00:25.560362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.498 [2024-11-20 00:00:25.560392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.498 qpair failed and we were unable to recover it. 00:35:51.498 [2024-11-20 00:00:25.570269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.498 [2024-11-20 00:00:25.570362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.498 [2024-11-20 00:00:25.570387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.498 [2024-11-20 00:00:25.570402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.498 [2024-11-20 00:00:25.570416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.498 [2024-11-20 00:00:25.570445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.498 qpair failed and we were unable to recover it. 
00:35:51.498 [2024-11-20 00:00:25.580329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.498 [2024-11-20 00:00:25.580468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.498 [2024-11-20 00:00:25.580493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.498 [2024-11-20 00:00:25.580508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.498 [2024-11-20 00:00:25.580521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.498 [2024-11-20 00:00:25.580552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.498 qpair failed and we were unable to recover it. 00:35:51.498 [2024-11-20 00:00:25.590326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.498 [2024-11-20 00:00:25.590441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.498 [2024-11-20 00:00:25.590467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.498 [2024-11-20 00:00:25.590481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.498 [2024-11-20 00:00:25.590495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.498 [2024-11-20 00:00:25.590526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.498 qpair failed and we were unable to recover it. 00:35:51.498 [2024-11-20 00:00:25.600288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.498 [2024-11-20 00:00:25.600393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.498 [2024-11-20 00:00:25.600418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.498 [2024-11-20 00:00:25.600432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.498 [2024-11-20 00:00:25.600445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.498 [2024-11-20 00:00:25.600476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.498 qpair failed and we were unable to recover it. 
00:35:51.498 [2024-11-20 00:00:25.610336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.498 [2024-11-20 00:00:25.610464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.498 [2024-11-20 00:00:25.610489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.498 [2024-11-20 00:00:25.610503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.498 [2024-11-20 00:00:25.610516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.498 [2024-11-20 00:00:25.610545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.498 qpair failed and we were unable to recover it. 00:35:51.498 [2024-11-20 00:00:25.620398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.498 [2024-11-20 00:00:25.620504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.498 [2024-11-20 00:00:25.620535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.498 [2024-11-20 00:00:25.620550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.498 [2024-11-20 00:00:25.620564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.498 [2024-11-20 00:00:25.620596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.498 qpair failed and we were unable to recover it. 00:35:51.498 [2024-11-20 00:00:25.630423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.498 [2024-11-20 00:00:25.630526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.498 [2024-11-20 00:00:25.630552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.498 [2024-11-20 00:00:25.630567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.498 [2024-11-20 00:00:25.630581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.498 [2024-11-20 00:00:25.630624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.498 qpair failed and we were unable to recover it. 
00:35:51.498 [2024-11-20 00:00:25.640478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.498 [2024-11-20 00:00:25.640566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.498 [2024-11-20 00:00:25.640592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.498 [2024-11-20 00:00:25.640607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.498 [2024-11-20 00:00:25.640620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.498 [2024-11-20 00:00:25.640664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.498 qpair failed and we were unable to recover it. 00:35:51.498 [2024-11-20 00:00:25.650443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.498 [2024-11-20 00:00:25.650529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.498 [2024-11-20 00:00:25.650555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.498 [2024-11-20 00:00:25.650569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.498 [2024-11-20 00:00:25.650583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.498 [2024-11-20 00:00:25.650626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.498 qpair failed and we were unable to recover it. 00:35:51.498 [2024-11-20 00:00:25.660543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.498 [2024-11-20 00:00:25.660669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.498 [2024-11-20 00:00:25.660695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.660709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.660729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.660761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 
00:35:51.499 [2024-11-20 00:00:25.670516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.670638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.670663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.670678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.670691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.670722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 00:35:51.499 [2024-11-20 00:00:25.680578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.680683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.680709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.680723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.680737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.680768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 00:35:51.499 [2024-11-20 00:00:25.690544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.690635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.690661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.690676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.690689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.690720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 
00:35:51.499 [2024-11-20 00:00:25.700573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.700667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.700692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.700706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.700720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.700751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 00:35:51.499 [2024-11-20 00:00:25.710635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.710725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.710750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.710766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.710780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.710810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 00:35:51.499 [2024-11-20 00:00:25.720637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.720727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.720753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.720767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.720780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.720811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 
00:35:51.499 [2024-11-20 00:00:25.730676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.730764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.730789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.730804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.730817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.730848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 00:35:51.499 [2024-11-20 00:00:25.740723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.740834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.740859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.740874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.740887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.740917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 00:35:51.499 [2024-11-20 00:00:25.750919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.751023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.751054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.751077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.751093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.751124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 
00:35:51.499 [2024-11-20 00:00:25.760832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.760938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.760963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.760977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.760991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.761022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 00:35:51.499 [2024-11-20 00:00:25.770846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.770965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.770991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.771006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.771019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.771049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 00:35:51.499 [2024-11-20 00:00:25.780894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.781035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.781061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.499 [2024-11-20 00:00:25.781087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.499 [2024-11-20 00:00:25.781102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.499 [2024-11-20 00:00:25.781132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.499 qpair failed and we were unable to recover it. 
00:35:51.499 [2024-11-20 00:00:25.790894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.499 [2024-11-20 00:00:25.790987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.499 [2024-11-20 00:00:25.791012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.500 [2024-11-20 00:00:25.791026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.500 [2024-11-20 00:00:25.791049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.500 [2024-11-20 00:00:25.791088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.500 qpair failed and we were unable to recover it. 00:35:51.500 [2024-11-20 00:00:25.800871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.500 [2024-11-20 00:00:25.801006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.500 [2024-11-20 00:00:25.801031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.500 [2024-11-20 00:00:25.801046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.500 [2024-11-20 00:00:25.801060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.500 [2024-11-20 00:00:25.801101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.500 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-20 00:00:25.810910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.763 [2024-11-20 00:00:25.811005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.763 [2024-11-20 00:00:25.811031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.763 [2024-11-20 00:00:25.811046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.763 [2024-11-20 00:00:25.811059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.763 [2024-11-20 00:00:25.811099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.763 qpair failed and we were unable to recover it. 
00:35:51.763 [2024-11-20 00:00:25.820990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.763 [2024-11-20 00:00:25.821110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.763 [2024-11-20 00:00:25.821136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.763 [2024-11-20 00:00:25.821150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.763 [2024-11-20 00:00:25.821163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.763 [2024-11-20 00:00:25.821194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-20 00:00:25.830967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.763 [2024-11-20 00:00:25.831052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.763 [2024-11-20 00:00:25.831085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.763 [2024-11-20 00:00:25.831101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.763 [2024-11-20 00:00:25.831115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.763 [2024-11-20 00:00:25.831144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-20 00:00:25.841024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.763 [2024-11-20 00:00:25.841171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.763 [2024-11-20 00:00:25.841196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.763 [2024-11-20 00:00:25.841211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.763 [2024-11-20 00:00:25.841224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.763 [2024-11-20 00:00:25.841255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.763 qpair failed and we were unable to recover it. 
00:35:51.763 [2024-11-20 00:00:25.851014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.763 [2024-11-20 00:00:25.851101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.763 [2024-11-20 00:00:25.851127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.763 [2024-11-20 00:00:25.851142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.763 [2024-11-20 00:00:25.851156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.763 [2024-11-20 00:00:25.851186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-20 00:00:25.861081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.763 [2024-11-20 00:00:25.861178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.763 [2024-11-20 00:00:25.861207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.763 [2024-11-20 00:00:25.861223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.763 [2024-11-20 00:00:25.861237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.763 [2024-11-20 00:00:25.861269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-20 00:00:25.871116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.763 [2024-11-20 00:00:25.871208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.763 [2024-11-20 00:00:25.871235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.763 [2024-11-20 00:00:25.871249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.763 [2024-11-20 00:00:25.871263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.763 [2024-11-20 00:00:25.871294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.763 qpair failed and we were unable to recover it. 
00:35:51.763 [2024-11-20 00:00:25.881158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.763 [2024-11-20 00:00:25.881251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.763 [2024-11-20 00:00:25.881277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.763 [2024-11-20 00:00:25.881292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.763 [2024-11-20 00:00:25.881305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.763 [2024-11-20 00:00:25.881337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-20 00:00:25.891217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.763 [2024-11-20 00:00:25.891306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.763 [2024-11-20 00:00:25.891332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.763 [2024-11-20 00:00:25.891347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.763 [2024-11-20 00:00:25.891360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.763 [2024-11-20 00:00:25.891391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.763 qpair failed and we were unable to recover it. 00:35:51.763 [2024-11-20 00:00:25.901156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.763 [2024-11-20 00:00:25.901249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.763 [2024-11-20 00:00:25.901275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.763 [2024-11-20 00:00:25.901289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.763 [2024-11-20 00:00:25.901302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.763 [2024-11-20 00:00:25.901335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.763 qpair failed and we were unable to recover it. 
00:35:51.763 [2024-11-20 00:00:25.911202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.763 [2024-11-20 00:00:25.911294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.763 [2024-11-20 00:00:25.911319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.763 [2024-11-20 00:00:25.911334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:25.911347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:25.911392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-20 00:00:25.921216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:25.921309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:25.921335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:25.921356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:25.921370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:25.921401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-20 00:00:25.931248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:25.931340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:25.931367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:25.931381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:25.931394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:25.931426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 
00:35:51.764 [2024-11-20 00:00:25.941431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:25.941546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:25.941572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:25.941587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:25.941600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:25.941630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-20 00:00:25.951342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:25.951442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:25.951472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:25.951487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:25.951501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:25.951532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-20 00:00:25.961361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:25.961462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:25.961492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:25.961507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:25.961520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:25.961559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 
00:35:51.764 [2024-11-20 00:00:25.971359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:25.971446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:25.971472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:25.971487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:25.971500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:25.971532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-20 00:00:25.981433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:25.981527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:25.981553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:25.981568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:25.981581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:25.981613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-20 00:00:25.991436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:25.991525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:25.991552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:25.991566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:25.991580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:25.991611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 
00:35:51.764 [2024-11-20 00:00:26.001433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:26.001520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:26.001546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:26.001560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:26.001574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:26.001605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-20 00:00:26.011465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:26.011556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:26.011582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:26.011597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:26.011611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:26.011655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-20 00:00:26.021637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:26.021763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:26.021788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:26.021802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:26.021816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:26.021847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 
00:35:51.764 [2024-11-20 00:00:26.031545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:26.031634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:26.031660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:26.031674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.764 [2024-11-20 00:00:26.031688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.764 [2024-11-20 00:00:26.031719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.764 qpair failed and we were unable to recover it. 00:35:51.764 [2024-11-20 00:00:26.041592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.764 [2024-11-20 00:00:26.041685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.764 [2024-11-20 00:00:26.041710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.764 [2024-11-20 00:00:26.041725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.765 [2024-11-20 00:00:26.041738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.765 [2024-11-20 00:00:26.041769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.765 qpair failed and we were unable to recover it. 00:35:51.765 [2024-11-20 00:00:26.051619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.765 [2024-11-20 00:00:26.051712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.765 [2024-11-20 00:00:26.051740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.765 [2024-11-20 00:00:26.051762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.765 [2024-11-20 00:00:26.051777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.765 [2024-11-20 00:00:26.051810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.765 qpair failed and we were unable to recover it. 
00:35:51.765 [2024-11-20 00:00:26.061612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.765 [2024-11-20 00:00:26.061705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.765 [2024-11-20 00:00:26.061731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.765 [2024-11-20 00:00:26.061746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.765 [2024-11-20 00:00:26.061759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:51.765 [2024-11-20 00:00:26.061790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:51.765 qpair failed and we were unable to recover it. 00:35:52.028 [2024-11-20 00:00:26.071685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.028 [2024-11-20 00:00:26.071778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.028 [2024-11-20 00:00:26.071804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.028 [2024-11-20 00:00:26.071819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.028 [2024-11-20 00:00:26.071832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.028 [2024-11-20 00:00:26.071863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.028 qpair failed and we were unable to recover it. 00:35:52.028 [2024-11-20 00:00:26.081759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.028 [2024-11-20 00:00:26.081863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.028 [2024-11-20 00:00:26.081922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.028 [2024-11-20 00:00:26.081946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.028 [2024-11-20 00:00:26.081963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.028 [2024-11-20 00:00:26.082014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.028 qpair failed and we were unable to recover it. 
00:35:52.028 [2024-11-20 00:00:26.091721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.028 [2024-11-20 00:00:26.091835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.028 [2024-11-20 00:00:26.091862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.028 [2024-11-20 00:00:26.091876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.028 [2024-11-20 00:00:26.091890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.028 [2024-11-20 00:00:26.091928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.028 qpair failed and we were unable to recover it. 00:35:52.028 [2024-11-20 00:00:26.101789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.028 [2024-11-20 00:00:26.101887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.028 [2024-11-20 00:00:26.101913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.028 [2024-11-20 00:00:26.101927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.028 [2024-11-20 00:00:26.101941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.028 [2024-11-20 00:00:26.101971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.028 qpair failed and we were unable to recover it. 00:35:52.028 [2024-11-20 00:00:26.111824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.028 [2024-11-20 00:00:26.111918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.028 [2024-11-20 00:00:26.111944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.028 [2024-11-20 00:00:26.111959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.028 [2024-11-20 00:00:26.111971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.028 [2024-11-20 00:00:26.112015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.028 qpair failed and we were unable to recover it. 
00:35:52.028 [2024-11-20 00:00:26.121820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.028 [2024-11-20 00:00:26.121907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.028 [2024-11-20 00:00:26.121933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.121947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.121964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.121996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 00:35:52.029 [2024-11-20 00:00:26.131841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.131935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.131960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.131975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.131989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.132019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 00:35:52.029 [2024-11-20 00:00:26.141946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.142038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.142064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.142087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.142101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.142132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 
00:35:52.029 [2024-11-20 00:00:26.151954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.152041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.152067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.152090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.152104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.152134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 00:35:52.029 [2024-11-20 00:00:26.161893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.161986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.162011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.162025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.162038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.162076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 00:35:52.029 [2024-11-20 00:00:26.171953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.172049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.172082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.172098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.172111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.172157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 
00:35:52.029 [2024-11-20 00:00:26.181985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.182109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.182144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.182160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.182174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.182217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 00:35:52.029 [2024-11-20 00:00:26.192007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.192108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.192134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.192148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.192161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.192192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 00:35:52.029 [2024-11-20 00:00:26.202118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.202217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.202243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.202257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.202271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.202301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 
00:35:52.029 [2024-11-20 00:00:26.212146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.212235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.212261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.212275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.212297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.212329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 00:35:52.029 [2024-11-20 00:00:26.222116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.222210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.222235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.222249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.222268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.222300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 00:35:52.029 [2024-11-20 00:00:26.232174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.232262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.232287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.232301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.232314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.232344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 
00:35:52.029 [2024-11-20 00:00:26.242239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.242363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.242388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.242402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.029 [2024-11-20 00:00:26.242416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.029 [2024-11-20 00:00:26.242447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.029 qpair failed and we were unable to recover it. 00:35:52.029 [2024-11-20 00:00:26.252174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.029 [2024-11-20 00:00:26.252265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.029 [2024-11-20 00:00:26.252291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.029 [2024-11-20 00:00:26.252306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.030 [2024-11-20 00:00:26.252319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.030 [2024-11-20 00:00:26.252349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.030 qpair failed and we were unable to recover it. 00:35:52.030 [2024-11-20 00:00:26.262208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.030 [2024-11-20 00:00:26.262301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.030 [2024-11-20 00:00:26.262326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.030 [2024-11-20 00:00:26.262340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.030 [2024-11-20 00:00:26.262354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.030 [2024-11-20 00:00:26.262384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.030 qpair failed and we were unable to recover it. 
00:35:52.030 [2024-11-20 00:00:26.272209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.030 [2024-11-20 00:00:26.272309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.030 [2024-11-20 00:00:26.272335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.030 [2024-11-20 00:00:26.272350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.030 [2024-11-20 00:00:26.272363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.030 [2024-11-20 00:00:26.272393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.030 qpair failed and we were unable to recover it. 00:35:52.030 [2024-11-20 00:00:26.282270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.030 [2024-11-20 00:00:26.282360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.030 [2024-11-20 00:00:26.282385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.030 [2024-11-20 00:00:26.282399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.030 [2024-11-20 00:00:26.282412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.030 [2024-11-20 00:00:26.282444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.030 qpair failed and we were unable to recover it. 00:35:52.030 [2024-11-20 00:00:26.292275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.030 [2024-11-20 00:00:26.292407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.030 [2024-11-20 00:00:26.292433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.030 [2024-11-20 00:00:26.292448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.030 [2024-11-20 00:00:26.292461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.030 [2024-11-20 00:00:26.292491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.030 qpair failed and we were unable to recover it. 
00:35:52.030 [2024-11-20 00:00:26.302363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.030 [2024-11-20 00:00:26.302501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.030 [2024-11-20 00:00:26.302526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.030 [2024-11-20 00:00:26.302540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.030 [2024-11-20 00:00:26.302554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.030 [2024-11-20 00:00:26.302586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.030 qpair failed and we were unable to recover it. 00:35:52.030 [2024-11-20 00:00:26.312327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.030 [2024-11-20 00:00:26.312421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.030 [2024-11-20 00:00:26.312452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.030 [2024-11-20 00:00:26.312467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.030 [2024-11-20 00:00:26.312480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.030 [2024-11-20 00:00:26.312511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.030 qpair failed and we were unable to recover it. 00:35:52.030 [2024-11-20 00:00:26.322362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.030 [2024-11-20 00:00:26.322452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.030 [2024-11-20 00:00:26.322478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.030 [2024-11-20 00:00:26.322492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.030 [2024-11-20 00:00:26.322505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.030 [2024-11-20 00:00:26.322535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.030 qpair failed and we were unable to recover it. 
00:35:52.030 [2024-11-20 00:00:26.332423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.030 [2024-11-20 00:00:26.332509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.030 [2024-11-20 00:00:26.332535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.030 [2024-11-20 00:00:26.332549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.030 [2024-11-20 00:00:26.332562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.030 [2024-11-20 00:00:26.332607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.030 qpair failed and we were unable to recover it. 00:35:52.292 [2024-11-20 00:00:26.342445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.292 [2024-11-20 00:00:26.342555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.292 [2024-11-20 00:00:26.342581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.292 [2024-11-20 00:00:26.342595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.292 [2024-11-20 00:00:26.342609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.292 [2024-11-20 00:00:26.342640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.292 qpair failed and we were unable to recover it. 00:35:52.292 [2024-11-20 00:00:26.352461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.292 [2024-11-20 00:00:26.352583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.292 [2024-11-20 00:00:26.352612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.292 [2024-11-20 00:00:26.352627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.292 [2024-11-20 00:00:26.352647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.292 [2024-11-20 00:00:26.352679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.292 qpair failed and we were unable to recover it. 
00:35:52.292 [2024-11-20 00:00:26.362591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.292 [2024-11-20 00:00:26.362680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.292 [2024-11-20 00:00:26.362705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.292 [2024-11-20 00:00:26.362719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.292 [2024-11-20 00:00:26.362733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.292 [2024-11-20 00:00:26.362764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.292 qpair failed and we were unable to recover it. 00:35:52.292 [2024-11-20 00:00:26.372508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.292 [2024-11-20 00:00:26.372639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.292 [2024-11-20 00:00:26.372665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.292 [2024-11-20 00:00:26.372679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.292 [2024-11-20 00:00:26.372692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.292 [2024-11-20 00:00:26.372722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.292 qpair failed and we were unable to recover it. 00:35:52.293 [2024-11-20 00:00:26.382538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.382631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.382656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.382670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.382683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.382714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 
00:35:52.293 [2024-11-20 00:00:26.392654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.392743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.392769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.392783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.392796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.392828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 00:35:52.293 [2024-11-20 00:00:26.402582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.402677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.402703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.402718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.402731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.402762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 00:35:52.293 [2024-11-20 00:00:26.412656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.412785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.412811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.412826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.412839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.412869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 
00:35:52.293 [2024-11-20 00:00:26.422697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.422842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.422868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.422882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.422895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.422938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 00:35:52.293 [2024-11-20 00:00:26.432682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.432775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.432801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.432815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.432828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.432859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 00:35:52.293 [2024-11-20 00:00:26.442702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.442801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.442827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.442841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.442854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.442884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 
00:35:52.293 [2024-11-20 00:00:26.452847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.452938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.452965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.452979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.452992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.453024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 00:35:52.293 [2024-11-20 00:00:26.462789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.462888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.462914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.462929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.462942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.462973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 00:35:52.293 [2024-11-20 00:00:26.472837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.472925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.472950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.472964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.472977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.473008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 
00:35:52.293 [2024-11-20 00:00:26.482815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.482904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.482929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.482950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.482965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.482996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 00:35:52.293 [2024-11-20 00:00:26.492880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.492968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.492993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.493008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.493021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.493052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 00:35:52.293 [2024-11-20 00:00:26.502907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.503003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.503028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.503042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.503056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.503095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 
00:35:52.293 [2024-11-20 00:00:26.512934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.513028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.513053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.513067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.513090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.293 [2024-11-20 00:00:26.513134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.293 qpair failed and we were unable to recover it. 00:35:52.293 [2024-11-20 00:00:26.522979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.293 [2024-11-20 00:00:26.523101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.293 [2024-11-20 00:00:26.523127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.293 [2024-11-20 00:00:26.523141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.293 [2024-11-20 00:00:26.523154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.294 [2024-11-20 00:00:26.523191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.294 qpair failed and we were unable to recover it. 00:35:52.294 [2024-11-20 00:00:26.532996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.294 [2024-11-20 00:00:26.533095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.294 [2024-11-20 00:00:26.533121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.294 [2024-11-20 00:00:26.533136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.294 [2024-11-20 00:00:26.533150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.294 [2024-11-20 00:00:26.533182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.294 qpair failed and we were unable to recover it. 
00:35:52.294 [2024-11-20 00:00:26.543052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.294 [2024-11-20 00:00:26.543168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.294 [2024-11-20 00:00:26.543194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.294 [2024-11-20 00:00:26.543208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.294 [2024-11-20 00:00:26.543221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.294 [2024-11-20 00:00:26.543264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.294 qpair failed and we were unable to recover it. 00:35:52.294 [2024-11-20 00:00:26.553115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.294 [2024-11-20 00:00:26.553249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.294 [2024-11-20 00:00:26.553275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.294 [2024-11-20 00:00:26.553289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.294 [2024-11-20 00:00:26.553302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.294 [2024-11-20 00:00:26.553334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.294 qpair failed and we were unable to recover it. 00:35:52.294 [2024-11-20 00:00:26.563033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.294 [2024-11-20 00:00:26.563181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.294 [2024-11-20 00:00:26.563207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.294 [2024-11-20 00:00:26.563221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.294 [2024-11-20 00:00:26.563234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.294 [2024-11-20 00:00:26.563263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.294 qpair failed and we were unable to recover it. 
00:35:52.294 [2024-11-20 00:00:26.573117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.294 [2024-11-20 00:00:26.573211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.294 [2024-11-20 00:00:26.573237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.294 [2024-11-20 00:00:26.573252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.294 [2024-11-20 00:00:26.573266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.294 [2024-11-20 00:00:26.573296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.294 qpair failed and we were unable to recover it. 00:35:52.294 [2024-11-20 00:00:26.583177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.294 [2024-11-20 00:00:26.583313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.294 [2024-11-20 00:00:26.583342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.294 [2024-11-20 00:00:26.583357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.294 [2024-11-20 00:00:26.583370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.294 [2024-11-20 00:00:26.583401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.294 qpair failed and we were unable to recover it. 00:35:52.294 [2024-11-20 00:00:26.593168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.294 [2024-11-20 00:00:26.593255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.294 [2024-11-20 00:00:26.593280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.294 [2024-11-20 00:00:26.593294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.294 [2024-11-20 00:00:26.593307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.294 [2024-11-20 00:00:26.593338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.294 qpair failed and we were unable to recover it. 
00:35:52.553 [2024-11-20 00:00:26.603255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.553 [2024-11-20 00:00:26.603395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.553 [2024-11-20 00:00:26.603422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.553 [2024-11-20 00:00:26.603437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.553 [2024-11-20 00:00:26.603450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.553 [2024-11-20 00:00:26.603482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.553 qpair failed and we were unable to recover it. 00:35:52.553 [2024-11-20 00:00:26.613304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.553 [2024-11-20 00:00:26.613397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.553 [2024-11-20 00:00:26.613430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.553 [2024-11-20 00:00:26.613446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.553 [2024-11-20 00:00:26.613460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.553 [2024-11-20 00:00:26.613491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.553 qpair failed and we were unable to recover it. 00:35:52.553 [2024-11-20 00:00:26.623250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.553 [2024-11-20 00:00:26.623349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.553 [2024-11-20 00:00:26.623375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.553 [2024-11-20 00:00:26.623393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.553 [2024-11-20 00:00:26.623408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.553 [2024-11-20 00:00:26.623439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.553 qpair failed and we were unable to recover it. 
00:35:52.553 [2024-11-20 00:00:26.633264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.553 [2024-11-20 00:00:26.633351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.553 [2024-11-20 00:00:26.633385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.553 [2024-11-20 00:00:26.633400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.553 [2024-11-20 00:00:26.633414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.553 [2024-11-20 00:00:26.633443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.553 qpair failed and we were unable to recover it. 00:35:52.553 [2024-11-20 00:00:26.643302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.553 [2024-11-20 00:00:26.643434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.553 [2024-11-20 00:00:26.643459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.553 [2024-11-20 00:00:26.643473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.553 [2024-11-20 00:00:26.643487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.553 [2024-11-20 00:00:26.643518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.553 qpair failed and we were unable to recover it. 00:35:52.553 [2024-11-20 00:00:26.653405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.553 [2024-11-20 00:00:26.653503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.553 [2024-11-20 00:00:26.653528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.553 [2024-11-20 00:00:26.653542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.553 [2024-11-20 00:00:26.653555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.553 [2024-11-20 00:00:26.653595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.553 qpair failed and we were unable to recover it. 
00:35:52.553 [2024-11-20 00:00:26.663418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.553 [2024-11-20 00:00:26.663519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.553 [2024-11-20 00:00:26.663548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.553 [2024-11-20 00:00:26.663563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.553 [2024-11-20 00:00:26.663576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.553 [2024-11-20 00:00:26.663608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.553 qpair failed and we were unable to recover it. 00:35:52.553 [2024-11-20 00:00:26.673409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.553 [2024-11-20 00:00:26.673499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.553 [2024-11-20 00:00:26.673524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.553 [2024-11-20 00:00:26.673538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.553 [2024-11-20 00:00:26.673552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.553 [2024-11-20 00:00:26.673590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.553 qpair failed and we were unable to recover it. 00:35:52.553 [2024-11-20 00:00:26.683439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.683523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.683549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.683563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.683577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.683620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 
00:35:52.554 [2024-11-20 00:00:26.693419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.693559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.693585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.693600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.693613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.693644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 00:35:52.554 [2024-11-20 00:00:26.703459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.703550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.703575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.703589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.703602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.703634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 00:35:52.554 [2024-11-20 00:00:26.713505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.713633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.713658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.713672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.713685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.713717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 
00:35:52.554 [2024-11-20 00:00:26.723535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.723623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.723649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.723663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.723677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.723707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 00:35:52.554 [2024-11-20 00:00:26.733604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.733696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.733722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.733736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.733750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.733783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 00:35:52.554 [2024-11-20 00:00:26.743596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.743727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.743758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.743773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.743787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.743817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 
00:35:52.554 [2024-11-20 00:00:26.753598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.753707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.753732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.753746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.753759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.753802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 00:35:52.554 [2024-11-20 00:00:26.763660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.763759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.763785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.763799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.763812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.763844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 00:35:52.554 [2024-11-20 00:00:26.773672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.773756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.773782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.773797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.773810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.773841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 
00:35:52.554 [2024-11-20 00:00:26.783739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.783837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.783863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.783877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.783896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.783928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 00:35:52.554 [2024-11-20 00:00:26.793726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.793821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.793850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.793866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.793880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.793911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 00:35:52.554 [2024-11-20 00:00:26.803761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.554 [2024-11-20 00:00:26.803851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.554 [2024-11-20 00:00:26.803877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.554 [2024-11-20 00:00:26.803891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.554 [2024-11-20 00:00:26.803904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.554 [2024-11-20 00:00:26.803936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.554 qpair failed and we were unable to recover it. 
00:35:52.554 [2024-11-20 00:00:26.813780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.555 [2024-11-20 00:00:26.813875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.555 [2024-11-20 00:00:26.813901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.555 [2024-11-20 00:00:26.813916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.555 [2024-11-20 00:00:26.813929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.555 [2024-11-20 00:00:26.813960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.555 qpair failed and we were unable to recover it. 00:35:52.555 [2024-11-20 00:00:26.823812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.555 [2024-11-20 00:00:26.823907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.555 [2024-11-20 00:00:26.823933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.555 [2024-11-20 00:00:26.823947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.555 [2024-11-20 00:00:26.823960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.555 [2024-11-20 00:00:26.823990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.555 qpair failed and we were unable to recover it. 00:35:52.555 [2024-11-20 00:00:26.833844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.555 [2024-11-20 00:00:26.833942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.555 [2024-11-20 00:00:26.833968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.555 [2024-11-20 00:00:26.833984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.555 [2024-11-20 00:00:26.833997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.555 [2024-11-20 00:00:26.834028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.555 qpair failed and we were unable to recover it. 
00:35:52.555 [2024-11-20 00:00:26.843874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.555 [2024-11-20 00:00:26.843962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.555 [2024-11-20 00:00:26.843987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.555 [2024-11-20 00:00:26.844002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.555 [2024-11-20 00:00:26.844015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.555 [2024-11-20 00:00:26.844045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.555 qpair failed and we were unable to recover it. 00:35:52.555 [2024-11-20 00:00:26.853905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.555 [2024-11-20 00:00:26.854032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.555 [2024-11-20 00:00:26.854061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.555 [2024-11-20 00:00:26.854090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.555 [2024-11-20 00:00:26.854105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.555 [2024-11-20 00:00:26.854151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.555 qpair failed and we were unable to recover it. 00:35:52.814 [2024-11-20 00:00:26.863936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.814 [2024-11-20 00:00:26.864054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.814 [2024-11-20 00:00:26.864090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.814 [2024-11-20 00:00:26.864106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.814 [2024-11-20 00:00:26.864120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.814 [2024-11-20 00:00:26.864150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.814 qpair failed and we were unable to recover it. 
00:35:52.814 [2024-11-20 00:00:26.873969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.814 [2024-11-20 00:00:26.874093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.814 [2024-11-20 00:00:26.874124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.814 [2024-11-20 00:00:26.874139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.814 [2024-11-20 00:00:26.874153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.814 [2024-11-20 00:00:26.874183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.814 qpair failed and we were unable to recover it. 00:35:52.814 [2024-11-20 00:00:26.884077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.814 [2024-11-20 00:00:26.884170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.814 [2024-11-20 00:00:26.884195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.814 [2024-11-20 00:00:26.884209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.814 [2024-11-20 00:00:26.884222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.814 [2024-11-20 00:00:26.884253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.814 qpair failed and we were unable to recover it. 00:35:52.814 [2024-11-20 00:00:26.893989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:26.894086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:26.894112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:26.894126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:26.894139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:26.894170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 
00:35:52.815 [2024-11-20 00:00:26.904051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:26.904155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:26.904185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:26.904201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:26.904215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:26.904246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 00:35:52.815 [2024-11-20 00:00:26.914094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:26.914189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:26.914215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:26.914236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:26.914250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:26.914281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 00:35:52.815 [2024-11-20 00:00:26.924147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:26.924257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:26.924282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:26.924296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:26.924310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:26.924340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 
00:35:52.815 [2024-11-20 00:00:26.934124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:26.934220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:26.934246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:26.934260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:26.934274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:26.934305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 00:35:52.815 [2024-11-20 00:00:26.944178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:26.944320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:26.944345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:26.944359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:26.944373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:26.944404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 00:35:52.815 [2024-11-20 00:00:26.954197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:26.954283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:26.954309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:26.954323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:26.954336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:26.954379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 
00:35:52.815 [2024-11-20 00:00:26.964207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:26.964295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:26.964321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:26.964335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:26.964348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:26.964378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 00:35:52.815 [2024-11-20 00:00:26.974257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:26.974356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:26.974382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:26.974396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:26.974409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:26.974440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 00:35:52.815 [2024-11-20 00:00:26.984268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:26.984361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:26.984386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:26.984401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:26.984414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:26.984446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 
00:35:52.815 [2024-11-20 00:00:26.994286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:26.994373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:26.994399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:26.994413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:26.994426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:26.994457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 00:35:52.815 [2024-11-20 00:00:27.004336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:27.004466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:27.004492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:27.004506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:27.004520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:27.004551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 00:35:52.815 [2024-11-20 00:00:27.014364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.815 [2024-11-20 00:00:27.014453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.815 [2024-11-20 00:00:27.014479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.815 [2024-11-20 00:00:27.014493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.815 [2024-11-20 00:00:27.014507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.815 [2024-11-20 00:00:27.014538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.815 qpair failed and we were unable to recover it. 
00:35:52.815 [2024-11-20 00:00:27.024395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.816 [2024-11-20 00:00:27.024491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.816 [2024-11-20 00:00:27.024516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.816 [2024-11-20 00:00:27.024530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.816 [2024-11-20 00:00:27.024543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.816 [2024-11-20 00:00:27.024574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.816 qpair failed and we were unable to recover it. 00:35:52.816 [2024-11-20 00:00:27.034454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.816 [2024-11-20 00:00:27.034550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.816 [2024-11-20 00:00:27.034576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.816 [2024-11-20 00:00:27.034590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.816 [2024-11-20 00:00:27.034603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.816 [2024-11-20 00:00:27.034646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.816 qpair failed and we were unable to recover it. 00:35:52.816 [2024-11-20 00:00:27.044465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.816 [2024-11-20 00:00:27.044556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.816 [2024-11-20 00:00:27.044584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.816 [2024-11-20 00:00:27.044607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.816 [2024-11-20 00:00:27.044622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.816 [2024-11-20 00:00:27.044654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.816 qpair failed and we were unable to recover it. 
00:35:52.816 [2024-11-20 00:00:27.054556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.816 [2024-11-20 00:00:27.054692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.816 [2024-11-20 00:00:27.054718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.816 [2024-11-20 00:00:27.054732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.816 [2024-11-20 00:00:27.054746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.816 [2024-11-20 00:00:27.054777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.816 qpair failed and we were unable to recover it. 00:35:52.816 [2024-11-20 00:00:27.064528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.816 [2024-11-20 00:00:27.064624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.816 [2024-11-20 00:00:27.064649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.816 [2024-11-20 00:00:27.064663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.816 [2024-11-20 00:00:27.064676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.816 [2024-11-20 00:00:27.064707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.816 qpair failed and we were unable to recover it. 00:35:52.816 [2024-11-20 00:00:27.074553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.816 [2024-11-20 00:00:27.074640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.816 [2024-11-20 00:00:27.074666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.816 [2024-11-20 00:00:27.074680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.816 [2024-11-20 00:00:27.074694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.816 [2024-11-20 00:00:27.074736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.816 qpair failed and we were unable to recover it. 
00:35:52.816 [2024-11-20 00:00:27.084639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.816 [2024-11-20 00:00:27.084725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.816 [2024-11-20 00:00:27.084751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.816 [2024-11-20 00:00:27.084765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.816 [2024-11-20 00:00:27.084778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.816 [2024-11-20 00:00:27.084814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.816 qpair failed and we were unable to recover it. 00:35:52.816 [2024-11-20 00:00:27.094646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.816 [2024-11-20 00:00:27.094737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.816 [2024-11-20 00:00:27.094763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.816 [2024-11-20 00:00:27.094777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.816 [2024-11-20 00:00:27.094790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.816 [2024-11-20 00:00:27.094823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.816 qpair failed and we were unable to recover it. 00:35:52.816 [2024-11-20 00:00:27.104610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.816 [2024-11-20 00:00:27.104702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.816 [2024-11-20 00:00:27.104729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.816 [2024-11-20 00:00:27.104743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.816 [2024-11-20 00:00:27.104756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.816 [2024-11-20 00:00:27.104788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.816 qpair failed and we were unable to recover it. 
00:35:52.816 [2024-11-20 00:00:27.114711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.816 [2024-11-20 00:00:27.114807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.816 [2024-11-20 00:00:27.114834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.816 [2024-11-20 00:00:27.114848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.816 [2024-11-20 00:00:27.114861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:52.816 [2024-11-20 00:00:27.114893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.816 qpair failed and we were unable to recover it. 00:35:53.077 [2024-11-20 00:00:27.124716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.077 [2024-11-20 00:00:27.124823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.077 [2024-11-20 00:00:27.124849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.077 [2024-11-20 00:00:27.124863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.077 [2024-11-20 00:00:27.124877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.077 [2024-11-20 00:00:27.124907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.077 qpair failed and we were unable to recover it. 00:35:53.077 [2024-11-20 00:00:27.134710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.077 [2024-11-20 00:00:27.134806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.077 [2024-11-20 00:00:27.134832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.077 [2024-11-20 00:00:27.134846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.077 [2024-11-20 00:00:27.134859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.077 [2024-11-20 00:00:27.134891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.077 qpair failed and we were unable to recover it. 
00:35:53.077 [2024-11-20 00:00:27.144779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.077 [2024-11-20 00:00:27.144880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.077 [2024-11-20 00:00:27.144905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.077 [2024-11-20 00:00:27.144919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.077 [2024-11-20 00:00:27.144932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.077 [2024-11-20 00:00:27.144963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.077 qpair failed and we were unable to recover it. 00:35:53.077 [2024-11-20 00:00:27.154766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.077 [2024-11-20 00:00:27.154854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.077 [2024-11-20 00:00:27.154879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.077 [2024-11-20 00:00:27.154893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.077 [2024-11-20 00:00:27.154906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.077 [2024-11-20 00:00:27.154937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.077 qpair failed and we were unable to recover it. 00:35:53.077 [2024-11-20 00:00:27.164805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.077 [2024-11-20 00:00:27.164890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.077 [2024-11-20 00:00:27.164916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.077 [2024-11-20 00:00:27.164931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.077 [2024-11-20 00:00:27.164944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.077 [2024-11-20 00:00:27.164974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.077 qpair failed and we were unable to recover it. 
00:35:53.077 [2024-11-20 00:00:27.174816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.077 [2024-11-20 00:00:27.174898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.077 [2024-11-20 00:00:27.174930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.077 [2024-11-20 00:00:27.174945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.077 [2024-11-20 00:00:27.174959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.077 [2024-11-20 00:00:27.175002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.077 qpair failed and we were unable to recover it. 00:35:53.077 [2024-11-20 00:00:27.184897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.077 [2024-11-20 00:00:27.184988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.077 [2024-11-20 00:00:27.185014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.077 [2024-11-20 00:00:27.185028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.077 [2024-11-20 00:00:27.185041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.077 [2024-11-20 00:00:27.185080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.077 qpair failed and we were unable to recover it. 00:35:53.077 [2024-11-20 00:00:27.194891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.077 [2024-11-20 00:00:27.194981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.077 [2024-11-20 00:00:27.195006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.077 [2024-11-20 00:00:27.195020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.077 [2024-11-20 00:00:27.195033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.077 [2024-11-20 00:00:27.195064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.077 qpair failed and we were unable to recover it. 
00:35:53.077 [2024-11-20 00:00:27.204920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.077 [2024-11-20 00:00:27.205017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.077 [2024-11-20 00:00:27.205046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.077 [2024-11-20 00:00:27.205061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.077 [2024-11-20 00:00:27.205081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.077 [2024-11-20 00:00:27.205113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.077 qpair failed and we were unable to recover it. 00:35:53.077 [2024-11-20 00:00:27.215026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.077 [2024-11-20 00:00:27.215132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.077 [2024-11-20 00:00:27.215158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.077 [2024-11-20 00:00:27.215172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.077 [2024-11-20 00:00:27.215185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.077 [2024-11-20 00:00:27.215222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.077 qpair failed and we were unable to recover it. 00:35:53.077 [2024-11-20 00:00:27.224959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.077 [2024-11-20 00:00:27.225062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.225094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.225109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.225121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.225152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 
00:35:53.078 [2024-11-20 00:00:27.234985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.235080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.235106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.235120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.235134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.235164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 00:35:53.078 [2024-11-20 00:00:27.245051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.245154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.245181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.245195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.245208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.245241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 00:35:53.078 [2024-11-20 00:00:27.255045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.255148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.255175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.255189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.255202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.255233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 
00:35:53.078 [2024-11-20 00:00:27.265097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.265227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.265256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.265272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.265285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.265317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 00:35:53.078 [2024-11-20 00:00:27.275123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.275218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.275247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.275262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.275276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.275320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 00:35:53.078 [2024-11-20 00:00:27.285142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.285233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.285259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.285273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.285288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.285319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 
00:35:53.078 [2024-11-20 00:00:27.295186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.295281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.295306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.295321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.295334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.295364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 00:35:53.078 [2024-11-20 00:00:27.305234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.305325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.305356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.305371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.305385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.305417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 00:35:53.078 [2024-11-20 00:00:27.315217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.315308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.315334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.315348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.315361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.315391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 
00:35:53.078 [2024-11-20 00:00:27.325303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.325398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.325424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.325439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.325452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.325483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 00:35:53.078 [2024-11-20 00:00:27.335292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.335386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.335412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.335427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.335440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.335471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 00:35:53.078 [2024-11-20 00:00:27.345336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.345461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.078 [2024-11-20 00:00:27.345487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.078 [2024-11-20 00:00:27.345501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.078 [2024-11-20 00:00:27.345520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.078 [2024-11-20 00:00:27.345552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.078 qpair failed and we were unable to recover it. 
00:35:53.078 [2024-11-20 00:00:27.355332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.078 [2024-11-20 00:00:27.355423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.079 [2024-11-20 00:00:27.355449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.079 [2024-11-20 00:00:27.355465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.079 [2024-11-20 00:00:27.355479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.079 [2024-11-20 00:00:27.355522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.079 qpair failed and we were unable to recover it. 00:35:53.079 [2024-11-20 00:00:27.365367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.079 [2024-11-20 00:00:27.365453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.079 [2024-11-20 00:00:27.365479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.079 [2024-11-20 00:00:27.365494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.079 [2024-11-20 00:00:27.365507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.079 [2024-11-20 00:00:27.365551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.079 qpair failed and we were unable to recover it. 00:35:53.079 [2024-11-20 00:00:27.375502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.079 [2024-11-20 00:00:27.375594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.079 [2024-11-20 00:00:27.375620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.079 [2024-11-20 00:00:27.375635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.079 [2024-11-20 00:00:27.375648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.079 [2024-11-20 00:00:27.375680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.079 qpair failed and we were unable to recover it. 
00:35:53.079 [2024-11-20 00:00:27.385506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.079 [2024-11-20 00:00:27.385612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.079 [2024-11-20 00:00:27.385637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.079 [2024-11-20 00:00:27.385651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.079 [2024-11-20 00:00:27.385664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.079 [2024-11-20 00:00:27.385694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.079 qpair failed and we were unable to recover it. 00:35:53.339 [2024-11-20 00:00:27.395523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.339 [2024-11-20 00:00:27.395616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.339 [2024-11-20 00:00:27.395642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.339 [2024-11-20 00:00:27.395657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.339 [2024-11-20 00:00:27.395670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.339 [2024-11-20 00:00:27.395700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-11-20 00:00:27.405569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.339 [2024-11-20 00:00:27.405659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.339 [2024-11-20 00:00:27.405684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.339 [2024-11-20 00:00:27.405698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.339 [2024-11-20 00:00:27.405711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.339 [2024-11-20 00:00:27.405743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-11-20 00:00:27.415561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.339 [2024-11-20 00:00:27.415677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.339 [2024-11-20 00:00:27.415702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.339 [2024-11-20 00:00:27.415717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.339 [2024-11-20 00:00:27.415730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.339 [2024-11-20 00:00:27.415760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-11-20 00:00:27.425629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.339 [2024-11-20 00:00:27.425778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.339 [2024-11-20 00:00:27.425803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.339 [2024-11-20 00:00:27.425818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.339 [2024-11-20 00:00:27.425831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.339 [2024-11-20 00:00:27.425862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.339 qpair failed and we were unable to recover it. 00:35:53.339 [2024-11-20 00:00:27.435588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.339 [2024-11-20 00:00:27.435676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.339 [2024-11-20 00:00:27.435708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.339 [2024-11-20 00:00:27.435724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.339 [2024-11-20 00:00:27.435737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.339 [2024-11-20 00:00:27.435768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.339 qpair failed and we were unable to recover it. 
00:35:53.339 [2024-11-20 00:00:27.445609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.339 [2024-11-20 00:00:27.445701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.339 [2024-11-20 00:00:27.445727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.339 [2024-11-20 00:00:27.445741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.445754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.445784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 00:35:53.340 [2024-11-20 00:00:27.455635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.340 [2024-11-20 00:00:27.455734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.340 [2024-11-20 00:00:27.455759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.340 [2024-11-20 00:00:27.455773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.455787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.455817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 00:35:53.340 [2024-11-20 00:00:27.465654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.340 [2024-11-20 00:00:27.465750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.340 [2024-11-20 00:00:27.465775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.340 [2024-11-20 00:00:27.465789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.465803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.465833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 
00:35:53.340 [2024-11-20 00:00:27.475699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.340 [2024-11-20 00:00:27.475792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.340 [2024-11-20 00:00:27.475817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.340 [2024-11-20 00:00:27.475838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.475852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.475883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 00:35:53.340 [2024-11-20 00:00:27.485700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.340 [2024-11-20 00:00:27.485785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.340 [2024-11-20 00:00:27.485811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.340 [2024-11-20 00:00:27.485825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.485838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.485882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 00:35:53.340 [2024-11-20 00:00:27.495711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.340 [2024-11-20 00:00:27.495815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.340 [2024-11-20 00:00:27.495840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.340 [2024-11-20 00:00:27.495855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.495868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.495898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 
00:35:53.340 [2024-11-20 00:00:27.505756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.340 [2024-11-20 00:00:27.505851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.340 [2024-11-20 00:00:27.505877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.340 [2024-11-20 00:00:27.505891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.505904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.505935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 00:35:53.340 [2024-11-20 00:00:27.515838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.340 [2024-11-20 00:00:27.515929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.340 [2024-11-20 00:00:27.515954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.340 [2024-11-20 00:00:27.515968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.515982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.516012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 00:35:53.340 [2024-11-20 00:00:27.525819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.340 [2024-11-20 00:00:27.525921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.340 [2024-11-20 00:00:27.525947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.340 [2024-11-20 00:00:27.525961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.525975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.526006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 
00:35:53.340 [2024-11-20 00:00:27.535871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.340 [2024-11-20 00:00:27.535967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.340 [2024-11-20 00:00:27.535992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.340 [2024-11-20 00:00:27.536007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.536020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.536051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 00:35:53.340 [2024-11-20 00:00:27.545896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.340 [2024-11-20 00:00:27.545993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.340 [2024-11-20 00:00:27.546018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.340 [2024-11-20 00:00:27.546032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.546046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.546095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 00:35:53.340 [2024-11-20 00:00:27.555915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.340 [2024-11-20 00:00:27.556014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.340 [2024-11-20 00:00:27.556040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.340 [2024-11-20 00:00:27.556054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.556074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.556131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 
00:35:53.340 [2024-11-20 00:00:27.565965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.340 [2024-11-20 00:00:27.566084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.340 [2024-11-20 00:00:27.566120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.340 [2024-11-20 00:00:27.566133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.340 [2024-11-20 00:00:27.566145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.340 [2024-11-20 00:00:27.566176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.340 qpair failed and we were unable to recover it. 00:35:53.341 [2024-11-20 00:00:27.576050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.341 [2024-11-20 00:00:27.576148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.341 [2024-11-20 00:00:27.576174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.341 [2024-11-20 00:00:27.576188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.341 [2024-11-20 00:00:27.576201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.341 [2024-11-20 00:00:27.576233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.341 qpair failed and we were unable to recover it. 00:35:53.341 [2024-11-20 00:00:27.585997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.341 [2024-11-20 00:00:27.586094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.341 [2024-11-20 00:00:27.586119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.341 [2024-11-20 00:00:27.586134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.341 [2024-11-20 00:00:27.586147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.341 [2024-11-20 00:00:27.586178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.341 qpair failed and we were unable to recover it. 
00:35:53.341 [2024-11-20 00:00:27.596006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.341 [2024-11-20 00:00:27.596101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.341 [2024-11-20 00:00:27.596125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.341 [2024-11-20 00:00:27.596140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.341 [2024-11-20 00:00:27.596153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.341 [2024-11-20 00:00:27.596184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.341 qpair failed and we were unable to recover it. 00:35:53.341 [2024-11-20 00:00:27.606021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.341 [2024-11-20 00:00:27.606115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.341 [2024-11-20 00:00:27.606140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.341 [2024-11-20 00:00:27.606160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.341 [2024-11-20 00:00:27.606174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.341 [2024-11-20 00:00:27.606204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.341 qpair failed and we were unable to recover it. 00:35:53.341 [2024-11-20 00:00:27.616058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.341 [2024-11-20 00:00:27.616191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.341 [2024-11-20 00:00:27.616216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.341 [2024-11-20 00:00:27.616230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.341 [2024-11-20 00:00:27.616242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.341 [2024-11-20 00:00:27.616272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.341 qpair failed and we were unable to recover it. 
00:35:53.341 [2024-11-20 00:00:27.626158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.341 [2024-11-20 00:00:27.626251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.341 [2024-11-20 00:00:27.626277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.341 [2024-11-20 00:00:27.626291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.341 [2024-11-20 00:00:27.626305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.341 [2024-11-20 00:00:27.626336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.341 qpair failed and we were unable to recover it. 00:35:53.341 [2024-11-20 00:00:27.636163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.341 [2024-11-20 00:00:27.636266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.341 [2024-11-20 00:00:27.636291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.341 [2024-11-20 00:00:27.636305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.341 [2024-11-20 00:00:27.636318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.341 [2024-11-20 00:00:27.636348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.341 qpair failed and we were unable to recover it. 00:35:53.341 [2024-11-20 00:00:27.646179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.341 [2024-11-20 00:00:27.646274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.341 [2024-11-20 00:00:27.646299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.341 [2024-11-20 00:00:27.646313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.341 [2024-11-20 00:00:27.646326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.341 [2024-11-20 00:00:27.646362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.341 qpair failed and we were unable to recover it. 
00:35:53.602 [2024-11-20 00:00:27.656192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.602 [2024-11-20 00:00:27.656315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.602 [2024-11-20 00:00:27.656341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.602 [2024-11-20 00:00:27.656355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.602 [2024-11-20 00:00:27.656368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.602 [2024-11-20 00:00:27.656399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-11-20 00:00:27.666214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.602 [2024-11-20 00:00:27.666311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.602 [2024-11-20 00:00:27.666335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.602 [2024-11-20 00:00:27.666349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.602 [2024-11-20 00:00:27.666363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.602 [2024-11-20 00:00:27.666391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-11-20 00:00:27.676275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.602 [2024-11-20 00:00:27.676367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.602 [2024-11-20 00:00:27.676392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.602 [2024-11-20 00:00:27.676406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.602 [2024-11-20 00:00:27.676419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.602 [2024-11-20 00:00:27.676450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.602 qpair failed and we were unable to recover it. 
00:35:53.602 [2024-11-20 00:00:27.686260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.602 [2024-11-20 00:00:27.686364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.602 [2024-11-20 00:00:27.686389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.602 [2024-11-20 00:00:27.686403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.602 [2024-11-20 00:00:27.686417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.602 [2024-11-20 00:00:27.686460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-11-20 00:00:27.696278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.602 [2024-11-20 00:00:27.696397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.602 [2024-11-20 00:00:27.696422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.602 [2024-11-20 00:00:27.696436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.602 [2024-11-20 00:00:27.696449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.602 [2024-11-20 00:00:27.696480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.602 qpair failed and we were unable to recover it. 00:35:53.602 [2024-11-20 00:00:27.706350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.602 [2024-11-20 00:00:27.706492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.602 [2024-11-20 00:00:27.706519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.602 [2024-11-20 00:00:27.706533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.602 [2024-11-20 00:00:27.706546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.602 [2024-11-20 00:00:27.706576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.602 qpair failed and we were unable to recover it. 
00:35:53.602 [2024-11-20 00:00:27.716415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.602 [2024-11-20 00:00:27.716510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.603 [2024-11-20 00:00:27.716536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.603 [2024-11-20 00:00:27.716550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.603 [2024-11-20 00:00:27.716563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.603 [2024-11-20 00:00:27.716594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-11-20 00:00:27.726421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.603 [2024-11-20 00:00:27.726523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.603 [2024-11-20 00:00:27.726553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.603 [2024-11-20 00:00:27.726568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.603 [2024-11-20 00:00:27.726581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.603 [2024-11-20 00:00:27.726613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-11-20 00:00:27.736406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.603 [2024-11-20 00:00:27.736501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.603 [2024-11-20 00:00:27.736533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.603 [2024-11-20 00:00:27.736549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.603 [2024-11-20 00:00:27.736562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.603 [2024-11-20 00:00:27.736593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.603 qpair failed and we were unable to recover it. 
00:35:53.603 [2024-11-20 00:00:27.746423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.603 [2024-11-20 00:00:27.746518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.603 [2024-11-20 00:00:27.746544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.603 [2024-11-20 00:00:27.746558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.603 [2024-11-20 00:00:27.746572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.603 [2024-11-20 00:00:27.746603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-11-20 00:00:27.756647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.603 [2024-11-20 00:00:27.756767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.603 [2024-11-20 00:00:27.756793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.603 [2024-11-20 00:00:27.756808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.603 [2024-11-20 00:00:27.756821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.603 [2024-11-20 00:00:27.756852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-11-20 00:00:27.766572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.603 [2024-11-20 00:00:27.766658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.603 [2024-11-20 00:00:27.766684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.603 [2024-11-20 00:00:27.766699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.603 [2024-11-20 00:00:27.766714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.603 [2024-11-20 00:00:27.766745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.603 qpair failed and we were unable to recover it. 
00:35:53.603 [2024-11-20 00:00:27.776597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.603 [2024-11-20 00:00:27.776692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.603 [2024-11-20 00:00:27.776719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.603 [2024-11-20 00:00:27.776733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.603 [2024-11-20 00:00:27.776752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.603 [2024-11-20 00:00:27.776784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-11-20 00:00:27.786650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.603 [2024-11-20 00:00:27.786786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.603 [2024-11-20 00:00:27.786811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.603 [2024-11-20 00:00:27.786826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.603 [2024-11-20 00:00:27.786839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.603 [2024-11-20 00:00:27.786869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-11-20 00:00:27.796623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.603 [2024-11-20 00:00:27.796713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.603 [2024-11-20 00:00:27.796738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.603 [2024-11-20 00:00:27.796753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.603 [2024-11-20 00:00:27.796766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.603 [2024-11-20 00:00:27.796796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.603 qpair failed and we were unable to recover it. 
00:35:53.603 [2024-11-20 00:00:27.806667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.603 [2024-11-20 00:00:27.806761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.603 [2024-11-20 00:00:27.806786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.603 [2024-11-20 00:00:27.806800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.603 [2024-11-20 00:00:27.806814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.603 [2024-11-20 00:00:27.806845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-11-20 00:00:27.816629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.603 [2024-11-20 00:00:27.816715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.603 [2024-11-20 00:00:27.816741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.603 [2024-11-20 00:00:27.816756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.603 [2024-11-20 00:00:27.816769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.603 [2024-11-20 00:00:27.816800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.603 qpair failed and we were unable to recover it. 00:35:53.603 [2024-11-20 00:00:27.826740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.603 [2024-11-20 00:00:27.826872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.604 [2024-11-20 00:00:27.826898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.604 [2024-11-20 00:00:27.826915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.604 [2024-11-20 00:00:27.826928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.604 [2024-11-20 00:00:27.826974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.604 qpair failed and we were unable to recover it. 
00:35:53.604 [2024-11-20 00:00:27.836708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.604 [2024-11-20 00:00:27.836791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.604 [2024-11-20 00:00:27.836816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.604 [2024-11-20 00:00:27.836831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.604 [2024-11-20 00:00:27.836844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.604 [2024-11-20 00:00:27.836888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-11-20 00:00:27.846793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.604 [2024-11-20 00:00:27.846900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.604 [2024-11-20 00:00:27.846926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.604 [2024-11-20 00:00:27.846940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.604 [2024-11-20 00:00:27.846954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.604 [2024-11-20 00:00:27.846984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-11-20 00:00:27.856767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.604 [2024-11-20 00:00:27.856864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.604 [2024-11-20 00:00:27.856889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.604 [2024-11-20 00:00:27.856904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.604 [2024-11-20 00:00:27.856917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.604 [2024-11-20 00:00:27.856948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.604 qpair failed and we were unable to recover it. 
00:35:53.604 [2024-11-20 00:00:27.866825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.604 [2024-11-20 00:00:27.866932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.604 [2024-11-20 00:00:27.866963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.604 [2024-11-20 00:00:27.866978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.604 [2024-11-20 00:00:27.866991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.604 [2024-11-20 00:00:27.867035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-11-20 00:00:27.876823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.604 [2024-11-20 00:00:27.876913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.604 [2024-11-20 00:00:27.876939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.604 [2024-11-20 00:00:27.876953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.604 [2024-11-20 00:00:27.876967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.604 [2024-11-20 00:00:27.876998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-11-20 00:00:27.886852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.604 [2024-11-20 00:00:27.886983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.604 [2024-11-20 00:00:27.887009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.604 [2024-11-20 00:00:27.887023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.604 [2024-11-20 00:00:27.887036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.604 [2024-11-20 00:00:27.887066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.604 qpair failed and we were unable to recover it. 
00:35:53.604 [2024-11-20 00:00:27.896890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.604 [2024-11-20 00:00:27.896980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.604 [2024-11-20 00:00:27.897006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.604 [2024-11-20 00:00:27.897020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.604 [2024-11-20 00:00:27.897033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.604 [2024-11-20 00:00:27.897062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.604 [2024-11-20 00:00:27.907009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.604 [2024-11-20 00:00:27.907113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.604 [2024-11-20 00:00:27.907138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.604 [2024-11-20 00:00:27.907152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.604 [2024-11-20 00:00:27.907171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.604 [2024-11-20 00:00:27.907201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.604 qpair failed and we were unable to recover it. 00:35:53.865 [2024-11-20 00:00:27.916930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:27.917031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:27.917056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:27.917079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:27.917095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:27.917127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 
00:35:53.865 [2024-11-20 00:00:27.926953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:27.927040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:27.927065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:27.927088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:27.927101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:27.927131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 00:35:53.865 [2024-11-20 00:00:27.936989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:27.937087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:27.937116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:27.937130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:27.937146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:27.937177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 00:35:53.865 [2024-11-20 00:00:27.947045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:27.947153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:27.947179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:27.947193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:27.947206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:27.947236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 
00:35:53.865 [2024-11-20 00:00:27.957058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:27.957165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:27.957193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:27.957208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:27.957221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:27.957253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 00:35:53.865 [2024-11-20 00:00:27.967066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:27.967188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:27.967214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:27.967228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:27.967241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:27.967271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 00:35:53.865 [2024-11-20 00:00:27.977104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:27.977195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:27.977221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:27.977235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:27.977249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:27.977279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 
00:35:53.865 [2024-11-20 00:00:27.987153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:27.987251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:27.987277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:27.987291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:27.987304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:27.987334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 00:35:53.865 [2024-11-20 00:00:27.997168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:27.997272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:27.997303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:27.997318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:27.997331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:27.997361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 00:35:53.865 [2024-11-20 00:00:28.007193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:28.007279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:28.007305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:28.007319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:28.007332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:28.007376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 
00:35:53.865 [2024-11-20 00:00:28.017199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:28.017289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:28.017314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:28.017328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:28.017342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:28.017373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 00:35:53.865 [2024-11-20 00:00:28.027298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:28.027437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:28.027462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:28.027476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:28.027489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:28.027533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 00:35:53.865 [2024-11-20 00:00:28.037297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:28.037389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:28.037415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:28.037436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:28.037450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:28.037481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 
00:35:53.865 [2024-11-20 00:00:28.047403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:28.047494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:28.047519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:28.047533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:28.047547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:28.047577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 00:35:53.865 [2024-11-20 00:00:28.057331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.865 [2024-11-20 00:00:28.057425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.865 [2024-11-20 00:00:28.057450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.865 [2024-11-20 00:00:28.057465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.865 [2024-11-20 00:00:28.057478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.865 [2024-11-20 00:00:28.057509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.865 qpair failed and we were unable to recover it. 00:35:53.865 [2024-11-20 00:00:28.067387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.866 [2024-11-20 00:00:28.067480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.866 [2024-11-20 00:00:28.067505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.866 [2024-11-20 00:00:28.067520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.866 [2024-11-20 00:00:28.067533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.866 [2024-11-20 00:00:28.067564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.866 qpair failed and we were unable to recover it. 
00:35:53.866 [2024-11-20 00:00:28.077424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.866 [2024-11-20 00:00:28.077522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.866 [2024-11-20 00:00:28.077547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.866 [2024-11-20 00:00:28.077562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.866 [2024-11-20 00:00:28.077575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.866 [2024-11-20 00:00:28.077608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.866 qpair failed and we were unable to recover it. 00:35:53.866 [2024-11-20 00:00:28.087399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.866 [2024-11-20 00:00:28.087522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.866 [2024-11-20 00:00:28.087547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.866 [2024-11-20 00:00:28.087561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.866 [2024-11-20 00:00:28.087575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.866 [2024-11-20 00:00:28.087606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.866 qpair failed and we were unable to recover it. 00:35:53.866 [2024-11-20 00:00:28.097445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.866 [2024-11-20 00:00:28.097536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.866 [2024-11-20 00:00:28.097562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.866 [2024-11-20 00:00:28.097576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.866 [2024-11-20 00:00:28.097589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.866 [2024-11-20 00:00:28.097621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.866 qpair failed and we were unable to recover it. 
00:35:53.866 [2024-11-20 00:00:28.107474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.866 [2024-11-20 00:00:28.107573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.866 [2024-11-20 00:00:28.107600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.866 [2024-11-20 00:00:28.107614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.866 [2024-11-20 00:00:28.107627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.866 [2024-11-20 00:00:28.107659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.866 qpair failed and we were unable to recover it. 00:35:53.866 [2024-11-20 00:00:28.117499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.866 [2024-11-20 00:00:28.117588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.866 [2024-11-20 00:00:28.117614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.866 [2024-11-20 00:00:28.117628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.866 [2024-11-20 00:00:28.117641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.866 [2024-11-20 00:00:28.117671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.866 qpair failed and we were unable to recover it. 00:35:53.866 [2024-11-20 00:00:28.127541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.866 [2024-11-20 00:00:28.127637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.866 [2024-11-20 00:00:28.127664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.866 [2024-11-20 00:00:28.127680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.866 [2024-11-20 00:00:28.127693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.866 [2024-11-20 00:00:28.127724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.866 qpair failed and we were unable to recover it. 
00:35:53.866 [2024-11-20 00:00:28.137523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.866 [2024-11-20 00:00:28.137613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.866 [2024-11-20 00:00:28.137639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.866 [2024-11-20 00:00:28.137653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.866 [2024-11-20 00:00:28.137666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.866 [2024-11-20 00:00:28.137698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.866 qpair failed and we were unable to recover it. 00:35:53.866 [2024-11-20 00:00:28.147613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.866 [2024-11-20 00:00:28.147707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.866 [2024-11-20 00:00:28.147734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.866 [2024-11-20 00:00:28.147753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.866 [2024-11-20 00:00:28.147766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.866 [2024-11-20 00:00:28.147798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.866 qpair failed and we were unable to recover it. 00:35:53.866 [2024-11-20 00:00:28.157638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.866 [2024-11-20 00:00:28.157729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.866 [2024-11-20 00:00:28.157755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.866 [2024-11-20 00:00:28.157769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.866 [2024-11-20 00:00:28.157782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.866 [2024-11-20 00:00:28.157814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.866 qpair failed and we were unable to recover it. 
00:35:53.866 [2024-11-20 00:00:28.167629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.866 [2024-11-20 00:00:28.167725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.866 [2024-11-20 00:00:28.167750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.866 [2024-11-20 00:00:28.167771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.866 [2024-11-20 00:00:28.167785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:53.866 [2024-11-20 00:00:28.167816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:53.866 qpair failed and we were unable to recover it. 00:35:54.127 [2024-11-20 00:00:28.177661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.127 [2024-11-20 00:00:28.177759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.127 [2024-11-20 00:00:28.177785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.127 [2024-11-20 00:00:28.177799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.127 [2024-11-20 00:00:28.177812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.127 [2024-11-20 00:00:28.177843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.127 qpair failed and we were unable to recover it. 00:35:54.127 [2024-11-20 00:00:28.187745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.127 [2024-11-20 00:00:28.187843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.127 [2024-11-20 00:00:28.187869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.127 [2024-11-20 00:00:28.187883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.127 [2024-11-20 00:00:28.187896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.127 [2024-11-20 00:00:28.187927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.127 qpair failed and we were unable to recover it. 
00:35:54.127 [2024-11-20 00:00:28.197798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.127 [2024-11-20 00:00:28.197887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.127 [2024-11-20 00:00:28.197912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.127 [2024-11-20 00:00:28.197926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.127 [2024-11-20 00:00:28.197940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.127 [2024-11-20 00:00:28.197970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.127 qpair failed and we were unable to recover it. 00:35:54.127 [2024-11-20 00:00:28.207776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.127 [2024-11-20 00:00:28.207865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.127 [2024-11-20 00:00:28.207891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.127 [2024-11-20 00:00:28.207905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.127 [2024-11-20 00:00:28.207918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.127 [2024-11-20 00:00:28.207955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.127 qpair failed and we were unable to recover it. 00:35:54.127 [2024-11-20 00:00:28.217866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.127 [2024-11-20 00:00:28.217982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.127 [2024-11-20 00:00:28.218008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.127 [2024-11-20 00:00:28.218023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.127 [2024-11-20 00:00:28.218037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.127 [2024-11-20 00:00:28.218077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.127 qpair failed and we were unable to recover it. 
00:35:54.127 [2024-11-20 00:00:28.227890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.127 [2024-11-20 00:00:28.228025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.127 [2024-11-20 00:00:28.228051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.127 [2024-11-20 00:00:28.228065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.127 [2024-11-20 00:00:28.228087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.127 [2024-11-20 00:00:28.228131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.127 qpair failed and we were unable to recover it. 00:35:54.127 [2024-11-20 00:00:28.237901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.127 [2024-11-20 00:00:28.238016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.127 [2024-11-20 00:00:28.238041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.127 [2024-11-20 00:00:28.238056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.127 [2024-11-20 00:00:28.238075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.127 [2024-11-20 00:00:28.238108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.127 qpair failed and we were unable to recover it. 00:35:54.127 [2024-11-20 00:00:28.247962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.127 [2024-11-20 00:00:28.248054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.127 [2024-11-20 00:00:28.248088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.248103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.248117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.248149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 
00:35:54.128 [2024-11-20 00:00:28.257888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.257976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.128 [2024-11-20 00:00:28.258001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.258016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.258029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.258061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 00:35:54.128 [2024-11-20 00:00:28.267931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.268028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.128 [2024-11-20 00:00:28.268054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.268075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.268092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.268122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 00:35:54.128 [2024-11-20 00:00:28.278029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.278156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.128 [2024-11-20 00:00:28.278182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.278196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.278209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.278240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 
00:35:54.128 [2024-11-20 00:00:28.287964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.288049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.128 [2024-11-20 00:00:28.288083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.288099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.288113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.288143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 00:35:54.128 [2024-11-20 00:00:28.298020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.298121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.128 [2024-11-20 00:00:28.298152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.298167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.298180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.298211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 00:35:54.128 [2024-11-20 00:00:28.308065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.308166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.128 [2024-11-20 00:00:28.308191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.308205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.308218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.308250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 
00:35:54.128 [2024-11-20 00:00:28.318094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.318188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.128 [2024-11-20 00:00:28.318213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.318227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.318240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.318270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 00:35:54.128 [2024-11-20 00:00:28.328094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.328198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.128 [2024-11-20 00:00:28.328227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.328243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.328256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.328288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 00:35:54.128 [2024-11-20 00:00:28.338145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.338269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.128 [2024-11-20 00:00:28.338295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.338310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.338328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.338376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 
00:35:54.128 [2024-11-20 00:00:28.348163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.348271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.128 [2024-11-20 00:00:28.348297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.348312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.348326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.348359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 00:35:54.128 [2024-11-20 00:00:28.358187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.358276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.128 [2024-11-20 00:00:28.358303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.358317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.358330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.358360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 00:35:54.128 [2024-11-20 00:00:28.368192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.368283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.128 [2024-11-20 00:00:28.368308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.128 [2024-11-20 00:00:28.368322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.128 [2024-11-20 00:00:28.368336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.128 [2024-11-20 00:00:28.368367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.128 qpair failed and we were unable to recover it. 
00:35:54.128 [2024-11-20 00:00:28.378264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.128 [2024-11-20 00:00:28.378358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.129 [2024-11-20 00:00:28.378384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.129 [2024-11-20 00:00:28.378398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.129 [2024-11-20 00:00:28.378412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.129 [2024-11-20 00:00:28.378441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.129 qpair failed and we were unable to recover it. 00:35:54.129 [2024-11-20 00:00:28.388281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.129 [2024-11-20 00:00:28.388416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.129 [2024-11-20 00:00:28.388442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.129 [2024-11-20 00:00:28.388456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.129 [2024-11-20 00:00:28.388469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.129 [2024-11-20 00:00:28.388503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.129 qpair failed and we were unable to recover it. 00:35:54.129 [2024-11-20 00:00:28.398330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.129 [2024-11-20 00:00:28.398422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.129 [2024-11-20 00:00:28.398448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.129 [2024-11-20 00:00:28.398463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.129 [2024-11-20 00:00:28.398476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.129 [2024-11-20 00:00:28.398506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.129 qpair failed and we were unable to recover it. 
00:35:54.129 [2024-11-20 00:00:28.408352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.129 [2024-11-20 00:00:28.408436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.129 [2024-11-20 00:00:28.408465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.129 [2024-11-20 00:00:28.408481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.129 [2024-11-20 00:00:28.408494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.129 [2024-11-20 00:00:28.408524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.129 qpair failed and we were unable to recover it. 00:35:54.129 [2024-11-20 00:00:28.418354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.129 [2024-11-20 00:00:28.418444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.129 [2024-11-20 00:00:28.418471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.129 [2024-11-20 00:00:28.418485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.129 [2024-11-20 00:00:28.418499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.129 [2024-11-20 00:00:28.418530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.129 qpair failed and we were unable to recover it. 00:35:54.129 [2024-11-20 00:00:28.428395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.129 [2024-11-20 00:00:28.428487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.129 [2024-11-20 00:00:28.428518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.129 [2024-11-20 00:00:28.428533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.129 [2024-11-20 00:00:28.428547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.129 [2024-11-20 00:00:28.428577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.129 qpair failed and we were unable to recover it. 
00:35:54.389 [2024-11-20 00:00:28.438442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.389 [2024-11-20 00:00:28.438552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.389 [2024-11-20 00:00:28.438578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.389 [2024-11-20 00:00:28.438592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.389 [2024-11-20 00:00:28.438606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.389 [2024-11-20 00:00:28.438637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-11-20 00:00:28.448457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.389 [2024-11-20 00:00:28.448545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.389 [2024-11-20 00:00:28.448571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.389 [2024-11-20 00:00:28.448585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.389 [2024-11-20 00:00:28.448598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.389 [2024-11-20 00:00:28.448627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-11-20 00:00:28.458462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.389 [2024-11-20 00:00:28.458557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.389 [2024-11-20 00:00:28.458583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.389 [2024-11-20 00:00:28.458598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.389 [2024-11-20 00:00:28.458611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.389 [2024-11-20 00:00:28.458654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.389 qpair failed and we were unable to recover it. 
00:35:54.389 [2024-11-20 00:00:28.468502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.389 [2024-11-20 00:00:28.468594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.389 [2024-11-20 00:00:28.468620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.389 [2024-11-20 00:00:28.468634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.389 [2024-11-20 00:00:28.468654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.389 [2024-11-20 00:00:28.468684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-11-20 00:00:28.478523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.389 [2024-11-20 00:00:28.478650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.389 [2024-11-20 00:00:28.478676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.389 [2024-11-20 00:00:28.478693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.389 [2024-11-20 00:00:28.478707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.389 [2024-11-20 00:00:28.478738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-11-20 00:00:28.488571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.389 [2024-11-20 00:00:28.488654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.389 [2024-11-20 00:00:28.488681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.389 [2024-11-20 00:00:28.488695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.389 [2024-11-20 00:00:28.488708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.389 [2024-11-20 00:00:28.488752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.389 qpair failed and we were unable to recover it. 
00:35:54.389 [2024-11-20 00:00:28.498589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.389 [2024-11-20 00:00:28.498678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.389 [2024-11-20 00:00:28.498704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.389 [2024-11-20 00:00:28.498718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.389 [2024-11-20 00:00:28.498731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.389 [2024-11-20 00:00:28.498762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-11-20 00:00:28.508660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.389 [2024-11-20 00:00:28.508773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.389 [2024-11-20 00:00:28.508799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.389 [2024-11-20 00:00:28.508813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.389 [2024-11-20 00:00:28.508825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.389 [2024-11-20 00:00:28.508855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-11-20 00:00:28.518683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.389 [2024-11-20 00:00:28.518794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.389 [2024-11-20 00:00:28.518820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.389 [2024-11-20 00:00:28.518834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.389 [2024-11-20 00:00:28.518847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.389 [2024-11-20 00:00:28.518877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.389 qpair failed and we were unable to recover it. 
00:35:54.389 [2024-11-20 00:00:28.528691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.389 [2024-11-20 00:00:28.528776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.389 [2024-11-20 00:00:28.528802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.389 [2024-11-20 00:00:28.528817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.389 [2024-11-20 00:00:28.528830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.389 [2024-11-20 00:00:28.528874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-11-20 00:00:28.538676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.389 [2024-11-20 00:00:28.538762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.389 [2024-11-20 00:00:28.538788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.389 [2024-11-20 00:00:28.538802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.389 [2024-11-20 00:00:28.538816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.389 [2024-11-20 00:00:28.538846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-11-20 00:00:28.548731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.389 [2024-11-20 00:00:28.548822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.389 [2024-11-20 00:00:28.548848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.389 [2024-11-20 00:00:28.548862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.390 [2024-11-20 00:00:28.548875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.390 [2024-11-20 00:00:28.548906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.390 qpair failed and we were unable to recover it. 
00:35:54.390 [2024-11-20 00:00:28.558765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.390 [2024-11-20 00:00:28.558856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.390 [2024-11-20 00:00:28.558887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.390 [2024-11-20 00:00:28.558902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.390 [2024-11-20 00:00:28.558916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.390 [2024-11-20 00:00:28.558946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-11-20 00:00:28.568831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.390 [2024-11-20 00:00:28.568952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.390 [2024-11-20 00:00:28.568978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.390 [2024-11-20 00:00:28.568992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.390 [2024-11-20 00:00:28.569004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.390 [2024-11-20 00:00:28.569034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-11-20 00:00:28.578818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.390 [2024-11-20 00:00:28.578909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.390 [2024-11-20 00:00:28.578935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.390 [2024-11-20 00:00:28.578949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.390 [2024-11-20 00:00:28.578961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.390 [2024-11-20 00:00:28.578992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.390 qpair failed and we were unable to recover it. 
00:35:54.390 [2024-11-20 00:00:28.588857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.390 [2024-11-20 00:00:28.588951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.390 [2024-11-20 00:00:28.588977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.390 [2024-11-20 00:00:28.588991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.390 [2024-11-20 00:00:28.589004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.390 [2024-11-20 00:00:28.589034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-11-20 00:00:28.598885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.390 [2024-11-20 00:00:28.598982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.390 [2024-11-20 00:00:28.599007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.390 [2024-11-20 00:00:28.599028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.390 [2024-11-20 00:00:28.599043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.390 [2024-11-20 00:00:28.599081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-11-20 00:00:28.608907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.390 [2024-11-20 00:00:28.608990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.390 [2024-11-20 00:00:28.609015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.390 [2024-11-20 00:00:28.609029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.390 [2024-11-20 00:00:28.609043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.390 [2024-11-20 00:00:28.609079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.390 qpair failed and we were unable to recover it. 
00:35:54.390 [2024-11-20 00:00:28.618966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.390 [2024-11-20 00:00:28.619056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.390 [2024-11-20 00:00:28.619090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.390 [2024-11-20 00:00:28.619105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.390 [2024-11-20 00:00:28.619117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.390 [2024-11-20 00:00:28.619147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-11-20 00:00:28.628980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.390 [2024-11-20 00:00:28.629085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.390 [2024-11-20 00:00:28.629110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.390 [2024-11-20 00:00:28.629125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.390 [2024-11-20 00:00:28.629138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.390 [2024-11-20 00:00:28.629169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-11-20 00:00:28.639066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.390 [2024-11-20 00:00:28.639163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.390 [2024-11-20 00:00:28.639188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.390 [2024-11-20 00:00:28.639202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.390 [2024-11-20 00:00:28.639215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.390 [2024-11-20 00:00:28.639247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.390 qpair failed and we were unable to recover it. 
00:35:54.390 [2024-11-20 00:00:28.649117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.390 [2024-11-20 00:00:28.649207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.390 [2024-11-20 00:00:28.649232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.390 [2024-11-20 00:00:28.649246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.390 [2024-11-20 00:00:28.649259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.390 [2024-11-20 00:00:28.649290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-11-20 00:00:28.659056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.390 [2024-11-20 00:00:28.659159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.390 [2024-11-20 00:00:28.659185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.390 [2024-11-20 00:00:28.659199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.390 [2024-11-20 00:00:28.659212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.390 [2024-11-20 00:00:28.659243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-11-20 00:00:28.669105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.390 [2024-11-20 00:00:28.669206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.390 [2024-11-20 00:00:28.669232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.390 [2024-11-20 00:00:28.669246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.391 [2024-11-20 00:00:28.669259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.391 [2024-11-20 00:00:28.669304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.391 qpair failed and we were unable to recover it. 
00:35:54.391 [2024-11-20 00:00:28.679108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.391 [2024-11-20 00:00:28.679201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.391 [2024-11-20 00:00:28.679227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.391 [2024-11-20 00:00:28.679241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.391 [2024-11-20 00:00:28.679254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.391 [2024-11-20 00:00:28.679285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-11-20 00:00:28.689148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.391 [2024-11-20 00:00:28.689247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.391 [2024-11-20 00:00:28.689273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.391 [2024-11-20 00:00:28.689287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.391 [2024-11-20 00:00:28.689301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.391 [2024-11-20 00:00:28.689331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.651 [2024-11-20 00:00:28.699193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.651 [2024-11-20 00:00:28.699287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.651 [2024-11-20 00:00:28.699315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.651 [2024-11-20 00:00:28.699330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.651 [2024-11-20 00:00:28.699343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.651 [2024-11-20 00:00:28.699374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.651 qpair failed and we were unable to recover it. 
00:35:54.651 [2024-11-20 00:00:28.709311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.651 [2024-11-20 00:00:28.709410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.651 [2024-11-20 00:00:28.709435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.651 [2024-11-20 00:00:28.709449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.651 [2024-11-20 00:00:28.709462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.651 [2024-11-20 00:00:28.709492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-11-20 00:00:28.719247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.651 [2024-11-20 00:00:28.719335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.651 [2024-11-20 00:00:28.719361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.651 [2024-11-20 00:00:28.719376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.651 [2024-11-20 00:00:28.719388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.651 [2024-11-20 00:00:28.719421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-11-20 00:00:28.729282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.651 [2024-11-20 00:00:28.729377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.651 [2024-11-20 00:00:28.729403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.651 [2024-11-20 00:00:28.729424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.651 [2024-11-20 00:00:28.729437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.651 [2024-11-20 00:00:28.729468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.651 qpair failed and we were unable to recover it. 
00:35:54.651 [2024-11-20 00:00:28.739299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.651 [2024-11-20 00:00:28.739423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.651 [2024-11-20 00:00:28.739448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.651 [2024-11-20 00:00:28.739462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.651 [2024-11-20 00:00:28.739476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.651 [2024-11-20 00:00:28.739507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-11-20 00:00:28.749321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.651 [2024-11-20 00:00:28.749420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.651 [2024-11-20 00:00:28.749445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.651 [2024-11-20 00:00:28.749459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.651 [2024-11-20 00:00:28.749472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.651 [2024-11-20 00:00:28.749505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-11-20 00:00:28.759359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.651 [2024-11-20 00:00:28.759479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.651 [2024-11-20 00:00:28.759504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.651 [2024-11-20 00:00:28.759518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.651 [2024-11-20 00:00:28.759532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.651 [2024-11-20 00:00:28.759563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.651 qpair failed and we were unable to recover it. 
00:35:54.651 [2024-11-20 00:00:28.769362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.651 [2024-11-20 00:00:28.769449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.651 [2024-11-20 00:00:28.769474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.651 [2024-11-20 00:00:28.769488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.651 [2024-11-20 00:00:28.769502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.651 [2024-11-20 00:00:28.769538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-11-20 00:00:28.779400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.651 [2024-11-20 00:00:28.779497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.651 [2024-11-20 00:00:28.779523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.651 [2024-11-20 00:00:28.779538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.651 [2024-11-20 00:00:28.779551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.651 [2024-11-20 00:00:28.779582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-11-20 00:00:28.789443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.651 [2024-11-20 00:00:28.789536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.651 [2024-11-20 00:00:28.789561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.651 [2024-11-20 00:00:28.789575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.651 [2024-11-20 00:00:28.789589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.651 [2024-11-20 00:00:28.789619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.651 qpair failed and we were unable to recover it. 
00:35:54.651 [2024-11-20 00:00:28.799475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.651 [2024-11-20 00:00:28.799577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.651 [2024-11-20 00:00:28.799602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.651 [2024-11-20 00:00:28.799616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.651 [2024-11-20 00:00:28.799629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.651 [2024-11-20 00:00:28.799659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-11-20 00:00:28.809534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.652 [2024-11-20 00:00:28.809634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.652 [2024-11-20 00:00:28.809663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.652 [2024-11-20 00:00:28.809679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.652 [2024-11-20 00:00:28.809692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.652 [2024-11-20 00:00:28.809724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-11-20 00:00:28.819530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.652 [2024-11-20 00:00:28.819621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.652 [2024-11-20 00:00:28.819648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.652 [2024-11-20 00:00:28.819662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.652 [2024-11-20 00:00:28.819675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.652 [2024-11-20 00:00:28.819707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.652 qpair failed and we were unable to recover it. 
00:35:54.652 [2024-11-20 00:00:28.829600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.652 [2024-11-20 00:00:28.829708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.652 [2024-11-20 00:00:28.829734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.652 [2024-11-20 00:00:28.829749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.652 [2024-11-20 00:00:28.829762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.652 [2024-11-20 00:00:28.829793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-11-20 00:00:28.839625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.652 [2024-11-20 00:00:28.839722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.652 [2024-11-20 00:00:28.839748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.652 [2024-11-20 00:00:28.839762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.652 [2024-11-20 00:00:28.839775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.652 [2024-11-20 00:00:28.839806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-11-20 00:00:28.849621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.652 [2024-11-20 00:00:28.849758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.652 [2024-11-20 00:00:28.849784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.652 [2024-11-20 00:00:28.849799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.652 [2024-11-20 00:00:28.849813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.652 [2024-11-20 00:00:28.849843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.652 qpair failed and we were unable to recover it. 
00:35:54.652 [2024-11-20 00:00:28.859666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.652 [2024-11-20 00:00:28.859754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.652 [2024-11-20 00:00:28.859786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.652 [2024-11-20 00:00:28.859801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.652 [2024-11-20 00:00:28.859817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.652 [2024-11-20 00:00:28.859861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-11-20 00:00:28.869680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.652 [2024-11-20 00:00:28.869781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.652 [2024-11-20 00:00:28.869807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.652 [2024-11-20 00:00:28.869821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.652 [2024-11-20 00:00:28.869834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.652 [2024-11-20 00:00:28.869865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-11-20 00:00:28.879693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.652 [2024-11-20 00:00:28.879826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.652 [2024-11-20 00:00:28.879852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.652 [2024-11-20 00:00:28.879866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.652 [2024-11-20 00:00:28.879879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.652 [2024-11-20 00:00:28.879909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.652 qpair failed and we were unable to recover it. 
00:35:54.652 [2024-11-20 00:00:28.889802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.652 [2024-11-20 00:00:28.889894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.652 [2024-11-20 00:00:28.889919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.652 [2024-11-20 00:00:28.889933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.652 [2024-11-20 00:00:28.889947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.652 [2024-11-20 00:00:28.889977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-11-20 00:00:28.899755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.652 [2024-11-20 00:00:28.899846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.652 [2024-11-20 00:00:28.899872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.652 [2024-11-20 00:00:28.899886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.652 [2024-11-20 00:00:28.899905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.652 [2024-11-20 00:00:28.899938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-11-20 00:00:28.909786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.652 [2024-11-20 00:00:28.909880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.652 [2024-11-20 00:00:28.909906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.652 [2024-11-20 00:00:28.909920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.652 [2024-11-20 00:00:28.909933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.652 [2024-11-20 00:00:28.909965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.652 qpair failed and we were unable to recover it. 
00:35:54.652 [2024-11-20 00:00:28.919916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.652 [2024-11-20 00:00:28.920005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.652 [2024-11-20 00:00:28.920030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.652 [2024-11-20 00:00:28.920045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.652 [2024-11-20 00:00:28.920076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.653 [2024-11-20 00:00:28.920109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-11-20 00:00:28.929879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.653 [2024-11-20 00:00:28.929971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.653 [2024-11-20 00:00:28.929997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.653 [2024-11-20 00:00:28.930011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.653 [2024-11-20 00:00:28.930024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.653 [2024-11-20 00:00:28.930055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-11-20 00:00:28.939895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.653 [2024-11-20 00:00:28.940007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.653 [2024-11-20 00:00:28.940033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.653 [2024-11-20 00:00:28.940047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.653 [2024-11-20 00:00:28.940060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.653 [2024-11-20 00:00:28.940116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.653 qpair failed and we were unable to recover it. 
00:35:54.653 [2024-11-20 00:00:28.949986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.653 [2024-11-20 00:00:28.950093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.653 [2024-11-20 00:00:28.950119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.653 [2024-11-20 00:00:28.950133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.653 [2024-11-20 00:00:28.950147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.653 [2024-11-20 00:00:28.950179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.912 [2024-11-20 00:00:28.959957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.912 [2024-11-20 00:00:28.960059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.912 [2024-11-20 00:00:28.960099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.912 [2024-11-20 00:00:28.960113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.912 [2024-11-20 00:00:28.960127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.912 [2024-11-20 00:00:28.960159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.912 qpair failed and we were unable to recover it. 00:35:54.912 [2024-11-20 00:00:28.969956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.912 [2024-11-20 00:00:28.970049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.912 [2024-11-20 00:00:28.970082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.912 [2024-11-20 00:00:28.970098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.912 [2024-11-20 00:00:28.970111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.912 [2024-11-20 00:00:28.970145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.912 qpair failed and we were unable to recover it. 
00:35:54.912 [2024-11-20 00:00:28.979976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.912 [2024-11-20 00:00:28.980109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.912 [2024-11-20 00:00:28.980136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.912 [2024-11-20 00:00:28.980150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.912 [2024-11-20 00:00:28.980164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.913 [2024-11-20 00:00:28.980195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.913 qpair failed and we were unable to recover it. 00:35:54.913 [2024-11-20 00:00:28.990018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.913 [2024-11-20 00:00:28.990124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.913 [2024-11-20 00:00:28.990156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.913 [2024-11-20 00:00:28.990171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.913 [2024-11-20 00:00:28.990185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.913 [2024-11-20 00:00:28.990215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.913 qpair failed and we were unable to recover it. 00:35:54.913 [2024-11-20 00:00:29.000060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.913 [2024-11-20 00:00:29.000161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.913 [2024-11-20 00:00:29.000187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.913 [2024-11-20 00:00:29.000200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.913 [2024-11-20 00:00:29.000214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.913 [2024-11-20 00:00:29.000245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.913 qpair failed and we were unable to recover it. 
00:35:54.913 [2024-11-20 00:00:29.010154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.913 [2024-11-20 00:00:29.010267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.913 [2024-11-20 00:00:29.010293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.913 [2024-11-20 00:00:29.010307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.913 [2024-11-20 00:00:29.010321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.913 [2024-11-20 00:00:29.010352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.913 qpair failed and we were unable to recover it. 00:35:54.913 [2024-11-20 00:00:29.020112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.913 [2024-11-20 00:00:29.020236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.913 [2024-11-20 00:00:29.020261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.913 [2024-11-20 00:00:29.020276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.913 [2024-11-20 00:00:29.020289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.913 [2024-11-20 00:00:29.020320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.913 qpair failed and we were unable to recover it. 00:35:54.913 [2024-11-20 00:00:29.030146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.913 [2024-11-20 00:00:29.030242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.913 [2024-11-20 00:00:29.030268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.913 [2024-11-20 00:00:29.030282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.913 [2024-11-20 00:00:29.030302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.913 [2024-11-20 00:00:29.030335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.913 qpair failed and we were unable to recover it. 
00:35:54.913 [2024-11-20 00:00:29.040179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.913 [2024-11-20 00:00:29.040273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.913 [2024-11-20 00:00:29.040298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.913 [2024-11-20 00:00:29.040312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.913 [2024-11-20 00:00:29.040326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.913 [2024-11-20 00:00:29.040357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.913 qpair failed and we were unable to recover it. 00:35:54.913 [2024-11-20 00:00:29.050201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.913 [2024-11-20 00:00:29.050287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.913 [2024-11-20 00:00:29.050312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.913 [2024-11-20 00:00:29.050326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.913 [2024-11-20 00:00:29.050339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.913 [2024-11-20 00:00:29.050369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.913 qpair failed and we were unable to recover it. 00:35:54.913 [2024-11-20 00:00:29.060235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.913 [2024-11-20 00:00:29.060355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.913 [2024-11-20 00:00:29.060380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.913 [2024-11-20 00:00:29.060395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.913 [2024-11-20 00:00:29.060408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.913 [2024-11-20 00:00:29.060438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.913 qpair failed and we were unable to recover it. 
00:35:54.913 [2024-11-20 00:00:29.070258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.913 [2024-11-20 00:00:29.070402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.913 [2024-11-20 00:00:29.070427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.913 [2024-11-20 00:00:29.070442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.913 [2024-11-20 00:00:29.070455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.913 [2024-11-20 00:00:29.070485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.913 qpair failed and we were unable to recover it. 00:35:54.913 [2024-11-20 00:00:29.080309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.913 [2024-11-20 00:00:29.080397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.913 [2024-11-20 00:00:29.080423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.913 [2024-11-20 00:00:29.080437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.913 [2024-11-20 00:00:29.080450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.913 [2024-11-20 00:00:29.080493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.913 qpair failed and we were unable to recover it. 00:35:54.913 [2024-11-20 00:00:29.090291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.913 [2024-11-20 00:00:29.090383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.913 [2024-11-20 00:00:29.090409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.913 [2024-11-20 00:00:29.090423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.913 [2024-11-20 00:00:29.090436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.913 [2024-11-20 00:00:29.090468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.913 qpair failed and we were unable to recover it. 
00:35:54.913 [2024-11-20 00:00:29.100321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.914 [2024-11-20 00:00:29.100456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.914 [2024-11-20 00:00:29.100482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.914 [2024-11-20 00:00:29.100496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.914 [2024-11-20 00:00:29.100510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.914 [2024-11-20 00:00:29.100541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.914 qpair failed and we were unable to recover it. 00:35:54.914 [2024-11-20 00:00:29.110406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.914 [2024-11-20 00:00:29.110504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.914 [2024-11-20 00:00:29.110530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.914 [2024-11-20 00:00:29.110544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.914 [2024-11-20 00:00:29.110557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.914 [2024-11-20 00:00:29.110600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.914 qpair failed and we were unable to recover it. 00:35:54.914 [2024-11-20 00:00:29.120380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.914 [2024-11-20 00:00:29.120472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.914 [2024-11-20 00:00:29.120502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.914 [2024-11-20 00:00:29.120517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.914 [2024-11-20 00:00:29.120530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.914 [2024-11-20 00:00:29.120561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.914 qpair failed and we were unable to recover it. 
00:35:54.914 [2024-11-20 00:00:29.130444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.914 [2024-11-20 00:00:29.130550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.914 [2024-11-20 00:00:29.130575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.914 [2024-11-20 00:00:29.130589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.914 [2024-11-20 00:00:29.130602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.914 [2024-11-20 00:00:29.130632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.914 qpair failed and we were unable to recover it. 00:35:54.914 [2024-11-20 00:00:29.140536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.914 [2024-11-20 00:00:29.140636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.914 [2024-11-20 00:00:29.140662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.914 [2024-11-20 00:00:29.140676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.914 [2024-11-20 00:00:29.140688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.914 [2024-11-20 00:00:29.140719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.914 qpair failed and we were unable to recover it. 00:35:54.914 [2024-11-20 00:00:29.150501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.914 [2024-11-20 00:00:29.150596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.914 [2024-11-20 00:00:29.150621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.914 [2024-11-20 00:00:29.150635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.914 [2024-11-20 00:00:29.150648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.914 [2024-11-20 00:00:29.150680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.914 qpair failed and we were unable to recover it. 
00:35:54.914 [2024-11-20 00:00:29.160549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.914 [2024-11-20 00:00:29.160637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.914 [2024-11-20 00:00:29.160662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.914 [2024-11-20 00:00:29.160683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.914 [2024-11-20 00:00:29.160696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.914 [2024-11-20 00:00:29.160726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.914 qpair failed and we were unable to recover it. 00:35:54.914 [2024-11-20 00:00:29.170543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.914 [2024-11-20 00:00:29.170633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.914 [2024-11-20 00:00:29.170658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.914 [2024-11-20 00:00:29.170671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.914 [2024-11-20 00:00:29.170685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.914 [2024-11-20 00:00:29.170715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.914 qpair failed and we were unable to recover it. 00:35:54.914 [2024-11-20 00:00:29.180568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.914 [2024-11-20 00:00:29.180660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.914 [2024-11-20 00:00:29.180687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.914 [2024-11-20 00:00:29.180701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.914 [2024-11-20 00:00:29.180714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.914 [2024-11-20 00:00:29.180744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.914 qpair failed and we were unable to recover it. 
00:35:54.914 [2024-11-20 00:00:29.190611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.914 [2024-11-20 00:00:29.190740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.914 [2024-11-20 00:00:29.190765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.914 [2024-11-20 00:00:29.190779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.914 [2024-11-20 00:00:29.190792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.914 [2024-11-20 00:00:29.190823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.914 qpair failed and we were unable to recover it. 00:35:54.914 [2024-11-20 00:00:29.200695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.914 [2024-11-20 00:00:29.200792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.914 [2024-11-20 00:00:29.200817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.914 [2024-11-20 00:00:29.200831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.914 [2024-11-20 00:00:29.200845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.914 [2024-11-20 00:00:29.200881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.914 qpair failed and we were unable to recover it. 00:35:54.914 [2024-11-20 00:00:29.210679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.914 [2024-11-20 00:00:29.210767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.915 [2024-11-20 00:00:29.210792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.915 [2024-11-20 00:00:29.210806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.915 [2024-11-20 00:00:29.210819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.915 [2024-11-20 00:00:29.210850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.915 qpair failed and we were unable to recover it. 
00:35:54.915 [2024-11-20 00:00:29.220700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.915 [2024-11-20 00:00:29.220787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.915 [2024-11-20 00:00:29.220813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.915 [2024-11-20 00:00:29.220827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.915 [2024-11-20 00:00:29.220839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:54.915 [2024-11-20 00:00:29.220870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:54.915 qpair failed and we were unable to recover it. 00:35:55.173 [2024-11-20 00:00:29.230774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.173 [2024-11-20 00:00:29.230890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.173 [2024-11-20 00:00:29.230916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.173 [2024-11-20 00:00:29.230930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.173 [2024-11-20 00:00:29.230944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.173 [2024-11-20 00:00:29.230977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.173 qpair failed and we were unable to recover it. 00:35:55.173 [2024-11-20 00:00:29.240773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.173 [2024-11-20 00:00:29.240865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.173 [2024-11-20 00:00:29.240892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.173 [2024-11-20 00:00:29.240906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.173 [2024-11-20 00:00:29.240919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.173 [2024-11-20 00:00:29.240951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.173 qpair failed and we were unable to recover it. 
00:35:55.173 [2024-11-20 00:00:29.250752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.173 [2024-11-20 00:00:29.250882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.173 [2024-11-20 00:00:29.250909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.173 [2024-11-20 00:00:29.250923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.173 [2024-11-20 00:00:29.250937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.173 [2024-11-20 00:00:29.250967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.173 qpair failed and we were unable to recover it. 00:35:55.173 [2024-11-20 00:00:29.260803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.173 [2024-11-20 00:00:29.260927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.173 [2024-11-20 00:00:29.260953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.173 [2024-11-20 00:00:29.260967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.173 [2024-11-20 00:00:29.260981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.173 [2024-11-20 00:00:29.261011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.173 qpair failed and we were unable to recover it. 00:35:55.173 [2024-11-20 00:00:29.270870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.173 [2024-11-20 00:00:29.270965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.173 [2024-11-20 00:00:29.270990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.173 [2024-11-20 00:00:29.271004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.173 [2024-11-20 00:00:29.271018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.271049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 
00:35:55.174 [2024-11-20 00:00:29.280844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.280931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.280957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.280971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.280984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.281014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 00:35:55.174 [2024-11-20 00:00:29.290913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.291007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.291032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.291052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.291066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.291109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 00:35:55.174 [2024-11-20 00:00:29.300905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.301021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.301047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.301061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.301083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.301115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 
00:35:55.174 [2024-11-20 00:00:29.310974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.311084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.311109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.311123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.311135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.311166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 00:35:55.174 [2024-11-20 00:00:29.321008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.321115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.321140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.321154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.321167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.321197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 00:35:55.174 [2024-11-20 00:00:29.330999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.331097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.331122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.331136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.331149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.331186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 
00:35:55.174 [2024-11-20 00:00:29.341064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.341167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.341193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.341207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.341221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.341251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 00:35:55.174 [2024-11-20 00:00:29.351087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.351181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.351206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.351220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.351233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.351263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 00:35:55.174 [2024-11-20 00:00:29.361125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.361250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.361276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.361290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.361303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.361333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 
00:35:55.174 [2024-11-20 00:00:29.371113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.371197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.371223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.371237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.371251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.371283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 00:35:55.174 [2024-11-20 00:00:29.381137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.381222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.381249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.381263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.381276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.381308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 00:35:55.174 [2024-11-20 00:00:29.391189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.391284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.391310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.391325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.391338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.174 [2024-11-20 00:00:29.391381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.174 qpair failed and we were unable to recover it. 
00:35:55.174 [2024-11-20 00:00:29.401251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.174 [2024-11-20 00:00:29.401343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.174 [2024-11-20 00:00:29.401369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.174 [2024-11-20 00:00:29.401383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.174 [2024-11-20 00:00:29.401396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.175 [2024-11-20 00:00:29.401427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.175 qpair failed and we were unable to recover it. 00:35:55.175 [2024-11-20 00:00:29.411264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.175 [2024-11-20 00:00:29.411354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.175 [2024-11-20 00:00:29.411380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.175 [2024-11-20 00:00:29.411395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.175 [2024-11-20 00:00:29.411408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.175 [2024-11-20 00:00:29.411438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.175 qpair failed and we were unable to recover it. 00:35:55.175 [2024-11-20 00:00:29.421236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.175 [2024-11-20 00:00:29.421333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.175 [2024-11-20 00:00:29.421364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.175 [2024-11-20 00:00:29.421379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.175 [2024-11-20 00:00:29.421393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.175 [2024-11-20 00:00:29.421424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.175 qpair failed and we were unable to recover it. 
00:35:55.175 [2024-11-20 00:00:29.431310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.175 [2024-11-20 00:00:29.431432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.175 [2024-11-20 00:00:29.431457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.175 [2024-11-20 00:00:29.431471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.175 [2024-11-20 00:00:29.431485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.175 [2024-11-20 00:00:29.431516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.175 qpair failed and we were unable to recover it. 00:35:55.175 [2024-11-20 00:00:29.441354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.175 [2024-11-20 00:00:29.441447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.175 [2024-11-20 00:00:29.441473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.175 [2024-11-20 00:00:29.441487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.175 [2024-11-20 00:00:29.441501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.175 [2024-11-20 00:00:29.441531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.175 qpair failed and we were unable to recover it. 00:35:55.175 [2024-11-20 00:00:29.451440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.175 [2024-11-20 00:00:29.451529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.175 [2024-11-20 00:00:29.451555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.175 [2024-11-20 00:00:29.451569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.175 [2024-11-20 00:00:29.451582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.175 [2024-11-20 00:00:29.451613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.175 qpair failed and we were unable to recover it. 
00:35:55.175 [2024-11-20 00:00:29.461462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.175 [2024-11-20 00:00:29.461557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.175 [2024-11-20 00:00:29.461582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.175 [2024-11-20 00:00:29.461597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.175 [2024-11-20 00:00:29.461615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.175 [2024-11-20 00:00:29.461649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.175 qpair failed and we were unable to recover it. 00:35:55.175 [2024-11-20 00:00:29.471428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.175 [2024-11-20 00:00:29.471535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.175 [2024-11-20 00:00:29.471560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.175 [2024-11-20 00:00:29.471574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.175 [2024-11-20 00:00:29.471587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.175 [2024-11-20 00:00:29.471619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.175 qpair failed and we were unable to recover it. 00:35:55.175 [2024-11-20 00:00:29.481539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.175 [2024-11-20 00:00:29.481630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.175 [2024-11-20 00:00:29.481656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.175 [2024-11-20 00:00:29.481670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.175 [2024-11-20 00:00:29.481683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.175 [2024-11-20 00:00:29.481714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.175 qpair failed and we were unable to recover it. 
00:35:55.435 [2024-11-20 00:00:29.491487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.491574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.491598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.491612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.491625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.435 [2024-11-20 00:00:29.491656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.435 qpair failed and we were unable to recover it. 00:35:55.435 [2024-11-20 00:00:29.501541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.501653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.501679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.501693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.501707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.435 [2024-11-20 00:00:29.501739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.435 qpair failed and we were unable to recover it. 00:35:55.435 [2024-11-20 00:00:29.511546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.511665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.511691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.511705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.511719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.435 [2024-11-20 00:00:29.511751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.435 qpair failed and we were unable to recover it. 
00:35:55.435 [2024-11-20 00:00:29.521621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.521719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.521745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.521759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.521772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.435 [2024-11-20 00:00:29.521803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.435 qpair failed and we were unable to recover it. 00:35:55.435 [2024-11-20 00:00:29.531681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.531771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.531796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.531810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.531824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.435 [2024-11-20 00:00:29.531853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.435 qpair failed and we were unable to recover it. 00:35:55.435 [2024-11-20 00:00:29.541636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.541718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.541743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.541758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.541771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.435 [2024-11-20 00:00:29.541801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.435 qpair failed and we were unable to recover it. 
00:35:55.435 [2024-11-20 00:00:29.551683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.551776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.551807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.551821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.551835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.435 [2024-11-20 00:00:29.551865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.435 qpair failed and we were unable to recover it. 00:35:55.435 [2024-11-20 00:00:29.561682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.561775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.561800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.561814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.561828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.435 [2024-11-20 00:00:29.561859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.435 qpair failed and we were unable to recover it. 00:35:55.435 [2024-11-20 00:00:29.571730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.571835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.571861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.571875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.571888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.435 [2024-11-20 00:00:29.571917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.435 qpair failed and we were unable to recover it. 
00:35:55.435 [2024-11-20 00:00:29.581721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.581805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.581831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.581846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.581859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.435 [2024-11-20 00:00:29.581890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.435 qpair failed and we were unable to recover it. 00:35:55.435 [2024-11-20 00:00:29.591774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.591870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.591895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.591910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.591929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.435 [2024-11-20 00:00:29.591960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.435 qpair failed and we were unable to recover it. 00:35:55.435 [2024-11-20 00:00:29.601769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.601869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.601895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.601909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.601922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.435 [2024-11-20 00:00:29.601953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.435 qpair failed and we were unable to recover it. 
00:35:55.435 [2024-11-20 00:00:29.611829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.435 [2024-11-20 00:00:29.611917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.435 [2024-11-20 00:00:29.611943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.435 [2024-11-20 00:00:29.611957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.435 [2024-11-20 00:00:29.611970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.612000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 00:35:55.436 [2024-11-20 00:00:29.621856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.621948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.621972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.621985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.621997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.622027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 00:35:55.436 [2024-11-20 00:00:29.631900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.632001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.632027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.632042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.632056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.632093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 
00:35:55.436 [2024-11-20 00:00:29.641915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.642001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.642027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.642041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.642054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.642093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 00:35:55.436 [2024-11-20 00:00:29.651918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.652011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.652037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.652051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.652065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.652105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 00:35:55.436 [2024-11-20 00:00:29.661967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.662061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.662096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.662112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.662125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.662156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 
00:35:55.436 [2024-11-20 00:00:29.672085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.672222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.672247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.672262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.672275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.672306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 00:35:55.436 [2024-11-20 00:00:29.682016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.682111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.682149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.682164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.682178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.682208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 00:35:55.436 [2024-11-20 00:00:29.692025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.692118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.692151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.692166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.692179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.692213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 
00:35:55.436 [2024-11-20 00:00:29.702049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.702157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.702182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.702196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.702209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.702239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 00:35:55.436 [2024-11-20 00:00:29.712115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.712207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.712232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.712246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.712259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.712289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 00:35:55.436 [2024-11-20 00:00:29.722134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.722223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.722249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.722269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.722283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.722314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 
00:35:55.436 [2024-11-20 00:00:29.732151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.732260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.732286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.732300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.732314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.436 [2024-11-20 00:00:29.732344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.436 qpair failed and we were unable to recover it. 00:35:55.436 [2024-11-20 00:00:29.742193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.436 [2024-11-20 00:00:29.742277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.436 [2024-11-20 00:00:29.742303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.436 [2024-11-20 00:00:29.742317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.436 [2024-11-20 00:00:29.742331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.437 [2024-11-20 00:00:29.742362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.437 qpair failed and we were unable to recover it. 00:35:55.696 [2024-11-20 00:00:29.752255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.696 [2024-11-20 00:00:29.752370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.696 [2024-11-20 00:00:29.752396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.696 [2024-11-20 00:00:29.752411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.696 [2024-11-20 00:00:29.752425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.696 [2024-11-20 00:00:29.752455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.696 qpair failed and we were unable to recover it. 
00:35:55.696 [2024-11-20 00:00:29.762259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.696 [2024-11-20 00:00:29.762350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.696 [2024-11-20 00:00:29.762375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.696 [2024-11-20 00:00:29.762389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.696 [2024-11-20 00:00:29.762402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.696 [2024-11-20 00:00:29.762439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.696 qpair failed and we were unable to recover it. 00:35:55.696 [2024-11-20 00:00:29.772288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.696 [2024-11-20 00:00:29.772394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.696 [2024-11-20 00:00:29.772420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.696 [2024-11-20 00:00:29.772434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.696 [2024-11-20 00:00:29.772447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.696 [2024-11-20 00:00:29.772490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.696 qpair failed and we were unable to recover it. 00:35:55.696 [2024-11-20 00:00:29.782274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.696 [2024-11-20 00:00:29.782375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.696 [2024-11-20 00:00:29.782401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.696 [2024-11-20 00:00:29.782415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.696 [2024-11-20 00:00:29.782428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.696 [2024-11-20 00:00:29.782459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.696 qpair failed and we were unable to recover it. 
00:35:55.696 [2024-11-20 00:00:29.792373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.696 [2024-11-20 00:00:29.792493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.696 [2024-11-20 00:00:29.792519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.696 [2024-11-20 00:00:29.792533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.696 [2024-11-20 00:00:29.792546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.696 [2024-11-20 00:00:29.792576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.696 qpair failed and we were unable to recover it. 00:35:55.697 [2024-11-20 00:00:29.802418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.802512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.802537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.802551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.802564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.802594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 00:35:55.697 [2024-11-20 00:00:29.812386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.812481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.812507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.812521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.812534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.812564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 
00:35:55.697 [2024-11-20 00:00:29.822436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.822552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.822578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.822592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.822606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.822650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 00:35:55.697 [2024-11-20 00:00:29.832453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.832551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.832576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.832590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.832603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.832633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 00:35:55.697 [2024-11-20 00:00:29.842486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.842574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.842600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.842614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.842627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.842657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 
00:35:55.697 [2024-11-20 00:00:29.852550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.852665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.852694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.852716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.852730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.852763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 00:35:55.697 [2024-11-20 00:00:29.862534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.862630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.862656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.862670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.862683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.862714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 00:35:55.697 [2024-11-20 00:00:29.872633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.872732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.872758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.872772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.872785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.872815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 
00:35:55.697 [2024-11-20 00:00:29.882580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.882696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.882721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.882735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.882748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.882779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 00:35:55.697 [2024-11-20 00:00:29.892646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.892754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.892779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.892794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.892807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.892844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 00:35:55.697 [2024-11-20 00:00:29.902620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.902720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.902746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.902760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.902773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.902805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 
00:35:55.697 [2024-11-20 00:00:29.912694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.912825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.912850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.912865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.912878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.912920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 00:35:55.697 [2024-11-20 00:00:29.922722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.697 [2024-11-20 00:00:29.922814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.697 [2024-11-20 00:00:29.922840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.697 [2024-11-20 00:00:29.922855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.697 [2024-11-20 00:00:29.922868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.697 [2024-11-20 00:00:29.922899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.697 qpair failed and we were unable to recover it. 00:35:55.698 [2024-11-20 00:00:29.932788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.698 [2024-11-20 00:00:29.932886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.698 [2024-11-20 00:00:29.932913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.698 [2024-11-20 00:00:29.932928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.698 [2024-11-20 00:00:29.932941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.698 [2024-11-20 00:00:29.932985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.698 qpair failed and we were unable to recover it. 
00:35:55.698 [2024-11-20 00:00:29.942771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.698 [2024-11-20 00:00:29.942858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.698 [2024-11-20 00:00:29.942883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.698 [2024-11-20 00:00:29.942897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.698 [2024-11-20 00:00:29.942911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.698 [2024-11-20 00:00:29.942943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.698 qpair failed and we were unable to recover it. 00:35:55.698 [2024-11-20 00:00:29.952801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.698 [2024-11-20 00:00:29.952893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.698 [2024-11-20 00:00:29.952919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.698 [2024-11-20 00:00:29.952933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.698 [2024-11-20 00:00:29.952946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.698 [2024-11-20 00:00:29.952977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.698 qpair failed and we were unable to recover it. 00:35:55.698 [2024-11-20 00:00:29.962833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.698 [2024-11-20 00:00:29.962932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.698 [2024-11-20 00:00:29.962959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.698 [2024-11-20 00:00:29.962979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.698 [2024-11-20 00:00:29.962993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.698 [2024-11-20 00:00:29.963025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.698 qpair failed and we were unable to recover it. 
00:35:55.698 [2024-11-20 00:00:29.972823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.698 [2024-11-20 00:00:29.972924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.698 [2024-11-20 00:00:29.972954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.698 [2024-11-20 00:00:29.972970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.698 [2024-11-20 00:00:29.972984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.698 [2024-11-20 00:00:29.973015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.698 qpair failed and we were unable to recover it. 00:35:55.698 [2024-11-20 00:00:29.982871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.698 [2024-11-20 00:00:29.982960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.698 [2024-11-20 00:00:29.982992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.698 [2024-11-20 00:00:29.983008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.698 [2024-11-20 00:00:29.983021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.698 [2024-11-20 00:00:29.983052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.698 qpair failed and we were unable to recover it. 00:35:55.698 [2024-11-20 00:00:29.992950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.698 [2024-11-20 00:00:29.993091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.698 [2024-11-20 00:00:29.993124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.698 [2024-11-20 00:00:29.993138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.698 [2024-11-20 00:00:29.993151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.698 [2024-11-20 00:00:29.993181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.698 qpair failed and we were unable to recover it. 
00:35:55.698 [2024-11-20 00:00:30.002985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.698 [2024-11-20 00:00:30.003095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.698 [2024-11-20 00:00:30.003124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.698 [2024-11-20 00:00:30.003139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.698 [2024-11-20 00:00:30.003152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.698 [2024-11-20 00:00:30.003183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.698 qpair failed and we were unable to recover it. 00:35:55.960 [2024-11-20 00:00:30.013004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.960 [2024-11-20 00:00:30.013105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.960 [2024-11-20 00:00:30.013138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.960 [2024-11-20 00:00:30.013153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.960 [2024-11-20 00:00:30.013166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.960 [2024-11-20 00:00:30.013199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-11-20 00:00:30.023031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.960 [2024-11-20 00:00:30.023156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.960 [2024-11-20 00:00:30.023185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.960 [2024-11-20 00:00:30.023200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.960 [2024-11-20 00:00:30.023222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.960 [2024-11-20 00:00:30.023256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.960 qpair failed and we were unable to recover it. 
00:35:55.960 [2024-11-20 00:00:30.033056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.960 [2024-11-20 00:00:30.033176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.960 [2024-11-20 00:00:30.033203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.960 [2024-11-20 00:00:30.033218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.960 [2024-11-20 00:00:30.033232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.960 [2024-11-20 00:00:30.033264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-11-20 00:00:30.043096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.960 [2024-11-20 00:00:30.043231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.960 [2024-11-20 00:00:30.043258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.960 [2024-11-20 00:00:30.043273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.960 [2024-11-20 00:00:30.043287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.960 [2024-11-20 00:00:30.043318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-11-20 00:00:30.053128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.961 [2024-11-20 00:00:30.053264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.961 [2024-11-20 00:00:30.053291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.961 [2024-11-20 00:00:30.053305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.961 [2024-11-20 00:00:30.053319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.961 [2024-11-20 00:00:30.053364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.961 qpair failed and we were unable to recover it. 
00:35:55.961 [2024-11-20 00:00:30.063077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.961 [2024-11-20 00:00:30.063171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.961 [2024-11-20 00:00:30.063197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.961 [2024-11-20 00:00:30.063212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.961 [2024-11-20 00:00:30.063226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.961 [2024-11-20 00:00:30.063257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.961 qpair failed and we were unable to recover it. 00:35:55.961 [2024-11-20 00:00:30.073169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.961 [2024-11-20 00:00:30.073280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.961 [2024-11-20 00:00:30.073308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.961 [2024-11-20 00:00:30.073324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.961 [2024-11-20 00:00:30.073338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.961 [2024-11-20 00:00:30.073370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.961 qpair failed and we were unable to recover it. 00:35:55.961 [2024-11-20 00:00:30.083179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.961 [2024-11-20 00:00:30.083294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.961 [2024-11-20 00:00:30.083320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.961 [2024-11-20 00:00:30.083334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.961 [2024-11-20 00:00:30.083348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.961 [2024-11-20 00:00:30.083379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.961 qpair failed and we were unable to recover it. 
00:35:55.961 [2024-11-20 00:00:30.093250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.961 [2024-11-20 00:00:30.093391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.961 [2024-11-20 00:00:30.093424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.961 [2024-11-20 00:00:30.093440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.961 [2024-11-20 00:00:30.093453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.961 [2024-11-20 00:00:30.093499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.961 qpair failed and we were unable to recover it. 00:35:55.961 [2024-11-20 00:00:30.103341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.961 [2024-11-20 00:00:30.103467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.961 [2024-11-20 00:00:30.103494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.961 [2024-11-20 00:00:30.103509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.961 [2024-11-20 00:00:30.103522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.961 [2024-11-20 00:00:30.103555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.961 qpair failed and we were unable to recover it. 00:35:55.961 [2024-11-20 00:00:30.113244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.961 [2024-11-20 00:00:30.113339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.961 [2024-11-20 00:00:30.113372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.961 [2024-11-20 00:00:30.113388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.961 [2024-11-20 00:00:30.113402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.961 [2024-11-20 00:00:30.113443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.961 qpair failed and we were unable to recover it. 
00:35:55.961 [2024-11-20 00:00:30.123276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.961 [2024-11-20 00:00:30.123372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.961 [2024-11-20 00:00:30.123402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.961 [2024-11-20 00:00:30.123417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.961 [2024-11-20 00:00:30.123431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.961 [2024-11-20 00:00:30.123463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.961 qpair failed and we were unable to recover it. 00:35:55.961 [2024-11-20 00:00:30.133404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.961 [2024-11-20 00:00:30.133535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.961 [2024-11-20 00:00:30.133572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.961 [2024-11-20 00:00:30.133590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.961 [2024-11-20 00:00:30.133605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.961 [2024-11-20 00:00:30.133645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.961 qpair failed and we were unable to recover it. 00:35:55.961 [2024-11-20 00:00:30.143307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.961 [2024-11-20 00:00:30.143397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.961 [2024-11-20 00:00:30.143424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.961 [2024-11-20 00:00:30.143439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.961 [2024-11-20 00:00:30.143452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.961 [2024-11-20 00:00:30.143483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.961 qpair failed and we were unable to recover it. 
00:35:55.961 [2024-11-20 00:00:30.153422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.961 [2024-11-20 00:00:30.153565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.961 [2024-11-20 00:00:30.153592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.961 [2024-11-20 00:00:30.153607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.961 [2024-11-20 00:00:30.153626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.961 [2024-11-20 00:00:30.153671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.961 qpair failed and we were unable to recover it. 00:35:55.961 [2024-11-20 00:00:30.163386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.961 [2024-11-20 00:00:30.163500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.962 [2024-11-20 00:00:30.163527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.962 [2024-11-20 00:00:30.163541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.962 [2024-11-20 00:00:30.163554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.962 [2024-11-20 00:00:30.163589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.962 qpair failed and we were unable to recover it. 00:35:55.962 [2024-11-20 00:00:30.173409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.962 [2024-11-20 00:00:30.173502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.962 [2024-11-20 00:00:30.173529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.962 [2024-11-20 00:00:30.173544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.962 [2024-11-20 00:00:30.173557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.962 [2024-11-20 00:00:30.173589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.962 qpair failed and we were unable to recover it. 
00:35:55.962 [2024-11-20 00:00:30.183443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.962 [2024-11-20 00:00:30.183536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.962 [2024-11-20 00:00:30.183564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.962 [2024-11-20 00:00:30.183579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.962 [2024-11-20 00:00:30.183593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.962 [2024-11-20 00:00:30.183626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.962 qpair failed and we were unable to recover it. 00:35:55.962 [2024-11-20 00:00:30.193481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.962 [2024-11-20 00:00:30.193581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.962 [2024-11-20 00:00:30.193610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.962 [2024-11-20 00:00:30.193625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.962 [2024-11-20 00:00:30.193639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.962 [2024-11-20 00:00:30.193684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.962 qpair failed and we were unable to recover it. 00:35:55.962 [2024-11-20 00:00:30.203486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.962 [2024-11-20 00:00:30.203613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.962 [2024-11-20 00:00:30.203640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.962 [2024-11-20 00:00:30.203657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.962 [2024-11-20 00:00:30.203671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.962 [2024-11-20 00:00:30.203703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.962 qpair failed and we were unable to recover it. 
00:35:55.962 [2024-11-20 00:00:30.213500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.962 [2024-11-20 00:00:30.213592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.962 [2024-11-20 00:00:30.213619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.962 [2024-11-20 00:00:30.213633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.962 [2024-11-20 00:00:30.213647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.962 [2024-11-20 00:00:30.213678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.962 qpair failed and we were unable to recover it. 00:35:55.962 [2024-11-20 00:00:30.223568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.962 [2024-11-20 00:00:30.223696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.962 [2024-11-20 00:00:30.223723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.962 [2024-11-20 00:00:30.223738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.962 [2024-11-20 00:00:30.223752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.962 [2024-11-20 00:00:30.223783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.962 qpair failed and we were unable to recover it. 00:35:55.962 [2024-11-20 00:00:30.233565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.962 [2024-11-20 00:00:30.233706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.962 [2024-11-20 00:00:30.233733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.962 [2024-11-20 00:00:30.233748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.962 [2024-11-20 00:00:30.233761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.962 [2024-11-20 00:00:30.233791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.962 qpair failed and we were unable to recover it. 
00:35:55.962 [2024-11-20 00:00:30.243651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.962 [2024-11-20 00:00:30.243747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.962 [2024-11-20 00:00:30.243780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.962 [2024-11-20 00:00:30.243795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.962 [2024-11-20 00:00:30.243809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.962 [2024-11-20 00:00:30.243841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.962 qpair failed and we were unable to recover it. 00:35:55.962 [2024-11-20 00:00:30.253631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.962 [2024-11-20 00:00:30.253756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.962 [2024-11-20 00:00:30.253783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.962 [2024-11-20 00:00:30.253798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.962 [2024-11-20 00:00:30.253810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.962 [2024-11-20 00:00:30.253842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.962 qpair failed and we were unable to recover it. 00:35:55.962 [2024-11-20 00:00:30.263693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.962 [2024-11-20 00:00:30.263791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.962 [2024-11-20 00:00:30.263818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.962 [2024-11-20 00:00:30.263833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.962 [2024-11-20 00:00:30.263847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:55.962 [2024-11-20 00:00:30.263880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.962 qpair failed and we were unable to recover it. 
00:35:56.226 [2024-11-20 00:00:30.273730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.226 [2024-11-20 00:00:30.273849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.226 [2024-11-20 00:00:30.273877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.226 [2024-11-20 00:00:30.273893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.226 [2024-11-20 00:00:30.273906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.226 [2024-11-20 00:00:30.273937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.226 qpair failed and we were unable to recover it. 00:35:56.226 [2024-11-20 00:00:30.283737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.226 [2024-11-20 00:00:30.283838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.226 [2024-11-20 00:00:30.283865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.226 [2024-11-20 00:00:30.283887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.226 [2024-11-20 00:00:30.283901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.226 [2024-11-20 00:00:30.283933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.226 qpair failed and we were unable to recover it. 00:35:56.226 [2024-11-20 00:00:30.293732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.226 [2024-11-20 00:00:30.293845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.226 [2024-11-20 00:00:30.293872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.226 [2024-11-20 00:00:30.293887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.226 [2024-11-20 00:00:30.293900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.226 [2024-11-20 00:00:30.293932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.226 qpair failed and we were unable to recover it. 
00:35:56.226 [2024-11-20 00:00:30.303834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.226 [2024-11-20 00:00:30.303941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.226 [2024-11-20 00:00:30.303973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.226 [2024-11-20 00:00:30.303992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.226 [2024-11-20 00:00:30.304006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.226 [2024-11-20 00:00:30.304052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.226 qpair failed and we were unable to recover it. 00:35:56.226 [2024-11-20 00:00:30.313841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.226 [2024-11-20 00:00:30.313939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.226 [2024-11-20 00:00:30.313965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.226 [2024-11-20 00:00:30.313980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.226 [2024-11-20 00:00:30.313993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.226 [2024-11-20 00:00:30.314024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.226 qpair failed and we were unable to recover it. 00:35:56.226 [2024-11-20 00:00:30.323910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.226 [2024-11-20 00:00:30.324046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.226 [2024-11-20 00:00:30.324081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.226 [2024-11-20 00:00:30.324100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.226 [2024-11-20 00:00:30.324114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.226 [2024-11-20 00:00:30.324152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.226 qpair failed and we were unable to recover it. 
00:35:56.226 [2024-11-20 00:00:30.333875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.226 [2024-11-20 00:00:30.333984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.226 [2024-11-20 00:00:30.334011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.226 [2024-11-20 00:00:30.334034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.226 [2024-11-20 00:00:30.334058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.226 [2024-11-20 00:00:30.334107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.226 qpair failed and we were unable to recover it. 00:35:56.226 [2024-11-20 00:00:30.343876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.226 [2024-11-20 00:00:30.343968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.226 [2024-11-20 00:00:30.343995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.226 [2024-11-20 00:00:30.344010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.226 [2024-11-20 00:00:30.344026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.226 [2024-11-20 00:00:30.344058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.226 qpair failed and we were unable to recover it. 00:35:56.226 [2024-11-20 00:00:30.353948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.226 [2024-11-20 00:00:30.354042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.226 [2024-11-20 00:00:30.354077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.226 [2024-11-20 00:00:30.354097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.226 [2024-11-20 00:00:30.354112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.226 [2024-11-20 00:00:30.354157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.226 qpair failed and we were unable to recover it. 
00:35:56.226 [2024-11-20 00:00:30.363936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.226 [2024-11-20 00:00:30.364030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.226 [2024-11-20 00:00:30.364057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.226 [2024-11-20 00:00:30.364080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.226 [2024-11-20 00:00:30.364095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.226 [2024-11-20 00:00:30.364128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.226 qpair failed and we were unable to recover it. 00:35:56.226 [2024-11-20 00:00:30.373959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.226 [2024-11-20 00:00:30.374097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.226 [2024-11-20 00:00:30.374125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.226 [2024-11-20 00:00:30.374140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.226 [2024-11-20 00:00:30.374154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.226 [2024-11-20 00:00:30.374185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.226 qpair failed and we were unable to recover it. 00:35:56.226 [2024-11-20 00:00:30.384007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.384120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.384147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.384162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.384176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.384209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 
00:35:56.227 [2024-11-20 00:00:30.394054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.394160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.394187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.394201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.394214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.394246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 00:35:56.227 [2024-11-20 00:00:30.404052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.404186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.404213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.404228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.404241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.404274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 00:35:56.227 [2024-11-20 00:00:30.414092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.414182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.414210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.414230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.414245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.414290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 
00:35:56.227 [2024-11-20 00:00:30.424139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.424233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.424260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.424281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.424306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.424347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 00:35:56.227 [2024-11-20 00:00:30.434195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.434298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.434325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.434339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.434352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.434396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 00:35:56.227 [2024-11-20 00:00:30.444171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.444258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.444284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.444299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.444313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.444344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 
00:35:56.227 [2024-11-20 00:00:30.454207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.454291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.454318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.454333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.454346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.454383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 00:35:56.227 [2024-11-20 00:00:30.464224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.464320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.464347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.464362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.464375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.464407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 00:35:56.227 [2024-11-20 00:00:30.474257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.474355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.474382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.474396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.474410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.474441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 
00:35:56.227 [2024-11-20 00:00:30.484311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.484405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.484432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.484447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.484461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.484492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 00:35:56.227 [2024-11-20 00:00:30.494357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.494476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.494503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.494518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.494531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.494565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 00:35:56.227 [2024-11-20 00:00:30.504381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.227 [2024-11-20 00:00:30.504502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.227 [2024-11-20 00:00:30.504529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.227 [2024-11-20 00:00:30.504544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.227 [2024-11-20 00:00:30.504557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.227 [2024-11-20 00:00:30.504588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.227 qpair failed and we were unable to recover it. 
00:35:56.228 [2024-11-20 00:00:30.514385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.228 [2024-11-20 00:00:30.514482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.228 [2024-11-20 00:00:30.514514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.228 [2024-11-20 00:00:30.514540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.228 [2024-11-20 00:00:30.514562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.228 [2024-11-20 00:00:30.514596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.228 qpair failed and we were unable to recover it. 00:35:56.228 [2024-11-20 00:00:30.524432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.228 [2024-11-20 00:00:30.524529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.228 [2024-11-20 00:00:30.524556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.228 [2024-11-20 00:00:30.524570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.228 [2024-11-20 00:00:30.524584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.228 [2024-11-20 00:00:30.524615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.228 qpair failed and we were unable to recover it. 00:35:56.489 [2024-11-20 00:00:30.534418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.489 [2024-11-20 00:00:30.534510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.489 [2024-11-20 00:00:30.534536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.489 [2024-11-20 00:00:30.534552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.489 [2024-11-20 00:00:30.534567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.489 [2024-11-20 00:00:30.534599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.489 qpair failed and we were unable to recover it. 
00:35:56.489 [2024-11-20 00:00:30.544590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.489 [2024-11-20 00:00:30.544686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.489 [2024-11-20 00:00:30.544718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.489 [2024-11-20 00:00:30.544734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.489 [2024-11-20 00:00:30.544748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.489 [2024-11-20 00:00:30.544780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.489 qpair failed and we were unable to recover it. 00:35:56.489 [2024-11-20 00:00:30.554562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.489 [2024-11-20 00:00:30.554653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.489 [2024-11-20 00:00:30.554679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.489 [2024-11-20 00:00:30.554694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.489 [2024-11-20 00:00:30.554715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.489 [2024-11-20 00:00:30.554761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.489 qpair failed and we were unable to recover it. 00:35:56.489 [2024-11-20 00:00:30.564635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.489 [2024-11-20 00:00:30.564771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.489 [2024-11-20 00:00:30.564798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.489 [2024-11-20 00:00:30.564813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.489 [2024-11-20 00:00:30.564826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.489 [2024-11-20 00:00:30.564858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.489 qpair failed and we were unable to recover it. 
00:35:56.489 [2024-11-20 00:00:30.574530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.574620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.574647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.574662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.574675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.574705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 00:35:56.490 [2024-11-20 00:00:30.584584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.584670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.584698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.584713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.584732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.584778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 00:35:56.490 [2024-11-20 00:00:30.594601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.594728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.594755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.594770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.594784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.594814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 
00:35:56.490 [2024-11-20 00:00:30.604640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.604732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.604759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.604784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.604808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.604846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 00:35:56.490 [2024-11-20 00:00:30.614639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.614730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.614757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.614771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.614785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.614816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 00:35:56.490 [2024-11-20 00:00:30.624714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.624841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.624866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.624881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.624893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.624923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 
00:35:56.490 [2024-11-20 00:00:30.634805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.634902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.634929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.634944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.634957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.634990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 00:35:56.490 [2024-11-20 00:00:30.644754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.644848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.644875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.644889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.644902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.644933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 00:35:56.490 [2024-11-20 00:00:30.654763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.654858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.654893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.654913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.654928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.654960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 
00:35:56.490 [2024-11-20 00:00:30.664835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.664926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.664956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.664973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.664987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.665032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 00:35:56.490 [2024-11-20 00:00:30.674851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.674966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.674998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.675015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.675029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.675060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 00:35:56.490 [2024-11-20 00:00:30.684897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.684982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.685009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.685023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.685037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.685079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 
00:35:56.490 [2024-11-20 00:00:30.694891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.490 [2024-11-20 00:00:30.694973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.490 [2024-11-20 00:00:30.694999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.490 [2024-11-20 00:00:30.695013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.490 [2024-11-20 00:00:30.695027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.490 [2024-11-20 00:00:30.695058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.490 qpair failed and we were unable to recover it. 00:35:56.490 [2024-11-20 00:00:30.704894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.491 [2024-11-20 00:00:30.705010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.491 [2024-11-20 00:00:30.705036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.491 [2024-11-20 00:00:30.705050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.491 [2024-11-20 00:00:30.705065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.491 [2024-11-20 00:00:30.705109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.491 qpair failed and we were unable to recover it. 00:35:56.491 [2024-11-20 00:00:30.714978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.491 [2024-11-20 00:00:30.715088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.491 [2024-11-20 00:00:30.715114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.491 [2024-11-20 00:00:30.715128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.491 [2024-11-20 00:00:30.715147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.491 [2024-11-20 00:00:30.715180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.491 qpair failed and we were unable to recover it. 
00:35:56.491 [2024-11-20 00:00:30.724969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.491 [2024-11-20 00:00:30.725062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.491 [2024-11-20 00:00:30.725100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.491 [2024-11-20 00:00:30.725115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.491 [2024-11-20 00:00:30.725129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.491 [2024-11-20 00:00:30.725160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.491 qpair failed and we were unable to recover it. 00:35:56.491 [2024-11-20 00:00:30.734984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.491 [2024-11-20 00:00:30.735113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.491 [2024-11-20 00:00:30.735139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.491 [2024-11-20 00:00:30.735153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.491 [2024-11-20 00:00:30.735168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.491 [2024-11-20 00:00:30.735201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.491 qpair failed and we were unable to recover it. 00:35:56.491 [2024-11-20 00:00:30.745065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.491 [2024-11-20 00:00:30.745180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.491 [2024-11-20 00:00:30.745206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.491 [2024-11-20 00:00:30.745220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.491 [2024-11-20 00:00:30.745233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.491 [2024-11-20 00:00:30.745278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.491 qpair failed and we were unable to recover it. 
00:35:56.491 [2024-11-20 00:00:30.755062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.491 [2024-11-20 00:00:30.755166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.491 [2024-11-20 00:00:30.755192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.491 [2024-11-20 00:00:30.755206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.491 [2024-11-20 00:00:30.755219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.491 [2024-11-20 00:00:30.755251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.491 qpair failed and we were unable to recover it. 00:35:56.491 [2024-11-20 00:00:30.765106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.491 [2024-11-20 00:00:30.765218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.491 [2024-11-20 00:00:30.765243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.491 [2024-11-20 00:00:30.765257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.491 [2024-11-20 00:00:30.765270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.491 [2024-11-20 00:00:30.765302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.491 qpair failed and we were unable to recover it. 00:35:56.491 [2024-11-20 00:00:30.775130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.491 [2024-11-20 00:00:30.775241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.491 [2024-11-20 00:00:30.775267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.491 [2024-11-20 00:00:30.775281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.491 [2024-11-20 00:00:30.775294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.491 [2024-11-20 00:00:30.775324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.491 qpair failed and we were unable to recover it. 
00:35:56.491 [2024-11-20 00:00:30.785150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.491 [2024-11-20 00:00:30.785256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.491 [2024-11-20 00:00:30.785282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.491 [2024-11-20 00:00:30.785296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.491 [2024-11-20 00:00:30.785309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.491 [2024-11-20 00:00:30.785341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.491 qpair failed and we were unable to recover it. 00:35:56.491 [2024-11-20 00:00:30.795184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.491 [2024-11-20 00:00:30.795280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.491 [2024-11-20 00:00:30.795306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.491 [2024-11-20 00:00:30.795320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.491 [2024-11-20 00:00:30.795333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.491 [2024-11-20 00:00:30.795365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.491 qpair failed and we were unable to recover it. 00:35:56.751 [2024-11-20 00:00:30.805235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.751 [2024-11-20 00:00:30.805340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.751 [2024-11-20 00:00:30.805366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.751 [2024-11-20 00:00:30.805380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.751 [2024-11-20 00:00:30.805393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.751 [2024-11-20 00:00:30.805422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.751 qpair failed and we were unable to recover it. 
00:35:56.751 [2024-11-20 00:00:30.815261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.751 [2024-11-20 00:00:30.815355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.751 [2024-11-20 00:00:30.815380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.751 [2024-11-20 00:00:30.815395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.751 [2024-11-20 00:00:30.815408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.751 [2024-11-20 00:00:30.815439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-11-20 00:00:30.825276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.751 [2024-11-20 00:00:30.825369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.751 [2024-11-20 00:00:30.825395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.751 [2024-11-20 00:00:30.825409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.751 [2024-11-20 00:00:30.825422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.751 [2024-11-20 00:00:30.825453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-11-20 00:00:30.835315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.751 [2024-11-20 00:00:30.835411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.751 [2024-11-20 00:00:30.835440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.751 [2024-11-20 00:00:30.835455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.751 [2024-11-20 00:00:30.835468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.751 [2024-11-20 00:00:30.835498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.751 qpair failed and we were unable to recover it. 
00:35:56.751 [2024-11-20 00:00:30.845344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.751 [2024-11-20 00:00:30.845438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.751 [2024-11-20 00:00:30.845463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.751 [2024-11-20 00:00:30.845485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.751 [2024-11-20 00:00:30.845500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.751 [2024-11-20 00:00:30.845530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-11-20 00:00:30.855337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.751 [2024-11-20 00:00:30.855439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.751 [2024-11-20 00:00:30.855465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.751 [2024-11-20 00:00:30.855479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.751 [2024-11-20 00:00:30.855492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.751 [2024-11-20 00:00:30.855522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-11-20 00:00:30.865461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.751 [2024-11-20 00:00:30.865553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.751 [2024-11-20 00:00:30.865579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.751 [2024-11-20 00:00:30.865594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.751 [2024-11-20 00:00:30.865608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.751 [2024-11-20 00:00:30.865638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.751 qpair failed and we were unable to recover it. 
00:35:56.751 [2024-11-20 00:00:30.875545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.751 [2024-11-20 00:00:30.875643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.751 [2024-11-20 00:00:30.875669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.751 [2024-11-20 00:00:30.875684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.751 [2024-11-20 00:00:30.875697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.751 [2024-11-20 00:00:30.875728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-11-20 00:00:30.885451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.751 [2024-11-20 00:00:30.885539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.751 [2024-11-20 00:00:30.885565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.751 [2024-11-20 00:00:30.885579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.751 [2024-11-20 00:00:30.885592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.751 [2024-11-20 00:00:30.885629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.751 qpair failed and we were unable to recover it. 00:35:56.751 [2024-11-20 00:00:30.895518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.751 [2024-11-20 00:00:30.895651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.751 [2024-11-20 00:00:30.895681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.751 [2024-11-20 00:00:30.895696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.751 [2024-11-20 00:00:30.895710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6070000b90 00:35:56.751 [2024-11-20 00:00:30.895741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.751 qpair failed and we were unable to recover it. 
00:35:56.751 [2024-11-20 00:00:30.905519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.751 [2024-11-20 00:00:30.905615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.751 [2024-11-20 00:00:30.905649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.751 [2024-11-20 00:00:30.905669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.751 [2024-11-20 00:00:30.905683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6064000b90 00:35:56.751 [2024-11-20 00:00:30.905717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:56.752 qpair failed and we were unable to recover it. 00:35:56.752 [2024-11-20 00:00:30.915527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.752 [2024-11-20 00:00:30.915625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.752 [2024-11-20 00:00:30.915653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.752 [2024-11-20 00:00:30.915668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.752 [2024-11-20 00:00:30.915682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6064000b90 00:35:56.752 [2024-11-20 00:00:30.915713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:56.752 qpair failed and we were unable to recover it. 00:35:56.752 [2024-11-20 00:00:30.925521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.752 [2024-11-20 00:00:30.925612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.752 [2024-11-20 00:00:30.925639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.752 [2024-11-20 00:00:30.925654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.752 [2024-11-20 00:00:30.925667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6064000b90 00:35:56.752 [2024-11-20 00:00:30.925699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:56.752 qpair failed and we were unable to recover it. 00:35:56.752 [2024-11-20 00:00:30.925830] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:35:56.752 A controller has encountered a failure and is being reset. 
00:35:56.752 [2024-11-20 00:00:30.935628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.752 [2024-11-20 00:00:30.935731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.752 [2024-11-20 00:00:30.935763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.752 [2024-11-20 00:00:30.935781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.752 [2024-11-20 00:00:30.935796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6068000b90 00:35:56.752 [2024-11-20 00:00:30.935843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.752 qpair failed and we were unable to recover it. 00:35:56.752 [2024-11-20 00:00:30.945583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.752 [2024-11-20 00:00:30.945677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.752 [2024-11-20 00:00:30.945705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.752 [2024-11-20 00:00:30.945720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.752 [2024-11-20 00:00:30.945734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6068000b90 00:35:56.752 [2024-11-20 00:00:30.945765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.752 qpair failed and we were unable to recover it. 00:35:56.752 [2024-11-20 00:00:30.955636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.752 [2024-11-20 00:00:30.955731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.752 [2024-11-20 00:00:30.955764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.752 [2024-11-20 00:00:30.955780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.752 [2024-11-20 00:00:30.955794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x129cb40 00:35:56.752 [2024-11-20 00:00:30.955826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.752 qpair failed and we were unable to recover it. 
00:35:56.752 [2024-11-20 00:00:30.965708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.752 [2024-11-20 00:00:30.965802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.752 [2024-11-20 00:00:30.965830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.752 [2024-11-20 00:00:30.965845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.752 [2024-11-20 00:00:30.965858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x129cb40 00:35:56.752 [2024-11-20 00:00:30.965888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.752 qpair failed and we were unable to recover it. 00:35:56.752 Controller properly reset. 00:35:56.752 Initializing NVMe Controllers 00:35:56.752 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:56.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:56.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:56.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:56.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:56.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:56.752 Initialization complete. Launching workers. 00:35:56.752 Starting thread on core 1 00:35:56.752 Starting thread on core 2 00:35:56.752 Starting thread on core 3 00:35:56.752 Starting thread on core 0 00:35:56.752 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:56.752 00:35:56.752 real 0m10.982s 00:35:56.752 user 0m19.208s 00:35:56.752 sys 0m5.020s 00:35:56.752 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.752 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.752 ************************************ 00:35:56.752 END TEST nvmf_target_disconnect_tc2 00:35:56.752 ************************************ 00:35:56.752 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:56.752 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:56.752 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:56.752 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:56.752 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:56.752 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:56.752 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:56.752 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:56.752 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:35:56.752 rmmod nvme_tcp 00:35:57.028 rmmod nvme_fabrics 00:35:57.028 rmmod nvme_keyring 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 342951 ']' 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 342951 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 342951 ']' 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 342951 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 342951 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 342951' 00:35:57.028 killing process with pid 342951 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 342951 00:35:57.028 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 342951 00:35:57.293 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:57.293 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:57.293 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:57.293 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:35:57.293 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:35:57.293 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:57.293 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:35:57.293 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:57.293 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:57.293 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:57.293 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:57.293 00:00:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.202 00:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:59.202 00:35:59.202 real 0m15.864s 00:35:59.202 user 0m46.127s 00:35:59.202 sys 0m7.007s 00:35:59.202 00:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:59.202 00:00:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:59.202 ************************************ 00:35:59.202 END TEST nvmf_target_disconnect 00:35:59.202 ************************************ 00:35:59.202 00:00:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:59.202 00:35:59.202 real 6m43.985s 00:35:59.202 user 17m15.129s 00:35:59.202 sys 1m26.279s 00:35:59.202 00:00:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:59.202 00:00:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.202 ************************************ 00:35:59.202 END TEST nvmf_host 00:35:59.202 ************************************ 00:35:59.202 00:00:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:35:59.202 00:00:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:35:59.202 00:00:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:59.202 00:00:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:59.202 00:00:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:59.203 00:00:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:59.203 ************************************ 00:35:59.203 START TEST nvmf_target_core_interrupt_mode 00:35:59.203 ************************************ 00:35:59.203 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:59.462 * Looking for test storage... 
00:35:59.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:59.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.462 --rc genhtml_branch_coverage=1 00:35:59.462 --rc genhtml_function_coverage=1 00:35:59.462 --rc genhtml_legend=1 00:35:59.462 --rc geninfo_all_blocks=1 00:35:59.462 --rc geninfo_unexecuted_blocks=1 00:35:59.462 00:35:59.462 ' 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:59.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.462 --rc genhtml_branch_coverage=1 00:35:59.462 --rc genhtml_function_coverage=1 00:35:59.462 --rc genhtml_legend=1 00:35:59.462 --rc geninfo_all_blocks=1 00:35:59.462 --rc geninfo_unexecuted_blocks=1 00:35:59.462 00:35:59.462 ' 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:59.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.462 --rc genhtml_branch_coverage=1 00:35:59.462 --rc genhtml_function_coverage=1 00:35:59.462 --rc genhtml_legend=1 00:35:59.462 --rc geninfo_all_blocks=1 00:35:59.462 --rc geninfo_unexecuted_blocks=1 00:35:59.462 00:35:59.462 ' 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:59.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.462 --rc genhtml_branch_coverage=1 00:35:59.462 --rc genhtml_function_coverage=1 00:35:59.462 --rc genhtml_legend=1 00:35:59.462 --rc geninfo_all_blocks=1 00:35:59.462 --rc geninfo_unexecuted_blocks=1 00:35:59.462 00:35:59.462 ' 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:59.462 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:59.463 ************************************ 00:35:59.463 START TEST nvmf_abort 00:35:59.463 ************************************ 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:59.463 * Looking for test storage... 00:35:59.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:35:59.463 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:59.722 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:59.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.723 --rc genhtml_branch_coverage=1 00:35:59.723 --rc genhtml_function_coverage=1 00:35:59.723 --rc genhtml_legend=1 00:35:59.723 --rc geninfo_all_blocks=1 00:35:59.723 --rc geninfo_unexecuted_blocks=1 00:35:59.723 00:35:59.723 ' 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:59.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.723 --rc genhtml_branch_coverage=1 00:35:59.723 --rc genhtml_function_coverage=1 00:35:59.723 --rc genhtml_legend=1 00:35:59.723 --rc geninfo_all_blocks=1 00:35:59.723 --rc geninfo_unexecuted_blocks=1 00:35:59.723 00:35:59.723 ' 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:59.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.723 --rc genhtml_branch_coverage=1 00:35:59.723 --rc genhtml_function_coverage=1 00:35:59.723 --rc genhtml_legend=1 00:35:59.723 --rc geninfo_all_blocks=1 00:35:59.723 --rc geninfo_unexecuted_blocks=1 00:35:59.723 00:35:59.723 ' 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:59.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.723 --rc genhtml_branch_coverage=1 00:35:59.723 --rc genhtml_function_coverage=1 00:35:59.723 --rc genhtml_legend=1 00:35:59.723 --rc geninfo_all_blocks=1 00:35:59.723 --rc geninfo_unexecuted_blocks=1 00:35:59.723 00:35:59.723 ' 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
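The version check traced above is scripts/common.sh deciding which lcov coverage options to use: the installed lcov version (1.15 here) is split on dots and compared field by field, as integers, against 2. A condensed sketch of that comparison, reusing the names from the trace:

  IFS=.-: read -ra ver1 <<< "1.15"   # ver1=(1 15)
  IFS=.-: read -ra ver2 <<< "2"      # ver2=(2)
  # fields are compared left to right as integers; 1 < 2, so "lt 1.15 2"
  # returns 0 and the lcov 1.x branch/function coverage flags are chosen.

Because lcov is older than 2.0, LCOV_OPTS picks up --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1, as echoed in the trace above.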
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:59.723 00:00:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:59.723 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:35:59.724 00:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:01.625 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:01.625 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:01.625 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:01.625 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:01.625 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:01.625 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:01.625 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:01.625 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:01.625 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:01.625 00:00:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:01.625 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:01.625 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:01.625 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:01.626 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:01.626 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:01.626 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:01.626 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:01.626 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:01.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:01.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:36:01.893 00:36:01.893 --- 10.0.0.2 ping statistics --- 00:36:01.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.893 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:01.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:01.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:36:01.893 00:36:01.893 --- 10.0.0.1 ping statistics --- 00:36:01.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.893 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=345761 
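At this point nvmftestinit has carved the two E810 ports into a back-to-back test topology: the target port (cvl_0_0) is moved into its own network namespace and addressed 10.0.0.2, while the initiator port (cvl_0_1) keeps 10.0.0.1 in the default namespace, with an iptables rule admitting traffic on the NVMe/TCP port. A condensed sketch of the commands traced above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # reachability check (both directions above)

The two successful pings above confirm the initiator side and the namespaced target port can reach each other before the target is started; the nvmfappstart call that begins here then launches nvmf_tgt inside that namespace, traced next.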
00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 345761 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 345761 ']' 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:01.893 00:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:01.893 [2024-11-20 00:00:36.028617] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:01.893 [2024-11-20 00:00:36.029712] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:36:01.893 [2024-11-20 00:00:36.029779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.893 [2024-11-20 00:00:36.102013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:01.893 [2024-11-20 00:00:36.147987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.893 [2024-11-20 00:00:36.148039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.893 [2024-11-20 00:00:36.148073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.893 [2024-11-20 00:00:36.148087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.893 [2024-11-20 00:00:36.148097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.893 [2024-11-20 00:00:36.149607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:01.893 [2024-11-20 00:00:36.149668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:01.893 [2024-11-20 00:00:36.149671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:02.184 [2024-11-20 00:00:36.234122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:02.184 [2024-11-20 00:00:36.234324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:02.184 [2024-11-20 00:00:36.234329] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
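The target itself is started inside that namespace in interrupt mode. Restated from the launch traced above (the full jenkins workspace path shortened):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE

Here -i 0 is the shared-memory id exported as NVMF_APP_SHM_ID and -e 0xFFFF the tracepoint group mask noted in the start-up messages. The -m 0xE core mask is binary 1110, i.e. cores 1-3, which matches the three reactor start-up notices; core 0 is left free for the initiator-side abort tool run later with -c 0x1. The surrounding spdk_thread notices show each poll group being switched to interrupt mode as the app comes up.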
00:36:02.184 [2024-11-20 00:00:36.234611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:02.184 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:02.184 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:02.184 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:02.184 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:02.184 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.185 [2024-11-20 00:00:36.290463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.185 Malloc0 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.185 Delay0 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
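With the target up, abort.sh configures it over the RPC socket: a TCP transport, a Malloc bdev (64 x 4096 per the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values set earlier) wrapped in a Delay bdev, and subsystem nqn.2016-06.io.spdk:cnode0 with Delay0 attached as its namespace; the 10.0.0.2:4420 listener is added just below. The equivalent scripts/rpc.py calls, assuming the default /var/tmp/spdk.sock socket the harness waits on:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0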
00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.185 [2024-11-20 00:00:36.362703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.185 00:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:02.445 [2024-11-20 00:00:36.513170] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:04.997 Initializing NVMe Controllers 00:36:04.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:04.997 controller IO queue size 128 less than required 00:36:04.997 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:04.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:04.997 Initialization complete. Launching workers. 
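The initiator-side load is SPDK's abort example from build/examples, pointed at the listener just created. Restated from the invocation traced above (path shortened):

  ./build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -q 128 -t 1 -l warning

It runs on core 0 (-c 0x1) and queues I/O at depth 128 against the delayed namespace, which triggers the queue-size warning above; the NS/CTRLR counters just below tally completed and failed I/Os alongside submitted and successful abort commands.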
00:36:04.997 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29419 00:36:04.997 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29476, failed to submit 66 00:36:04.997 success 29419, unsuccessful 57, failed 0 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:04.997 rmmod nvme_tcp 00:36:04.997 rmmod nvme_fabrics 00:36:04.997 rmmod nvme_keyring 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:04.997 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:04.998 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 345761 ']' 00:36:04.998 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 345761 00:36:04.998 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 345761 ']' 00:36:04.998 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 345761 00:36:04.998 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:04.998 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:04.998 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 345761 00:36:04.998 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:04.998 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:04.998 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 345761' 00:36:04.998 killing process with pid 345761 00:36:04.998 
00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 345761 00:36:04.998 00:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 345761 00:36:04.998 00:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:04.998 00:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:04.998 00:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:04.998 00:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:04.998 00:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:04.998 00:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:04.998 00:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:04.998 00:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:04.998 00:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:04.999 00:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.999 00:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:04.999 00:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.913 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:06.913 00:36:06.913 real 0m7.423s 00:36:06.913 user 0m9.795s 00:36:06.913 sys 0m2.894s 00:36:06.913 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.913 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.913 ************************************ 00:36:06.913 END TEST nvmf_abort 00:36:06.913 ************************************ 00:36:06.914 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:06.914 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:06.914 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:06.914 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:06.914 ************************************ 00:36:06.914 START TEST nvmf_ns_hotplug_stress 00:36:06.914 ************************************ 00:36:06.914 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:06.914 * Looking for test storage... 
00:36:06.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:06.914 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:06.914 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:36:06.914 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:07.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.171 --rc genhtml_branch_coverage=1 00:36:07.171 --rc genhtml_function_coverage=1 00:36:07.171 --rc genhtml_legend=1 00:36:07.171 --rc geninfo_all_blocks=1 00:36:07.171 --rc geninfo_unexecuted_blocks=1 00:36:07.171 00:36:07.171 ' 00:36:07.171 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.172 --rc genhtml_branch_coverage=1 00:36:07.172 --rc genhtml_function_coverage=1 00:36:07.172 --rc genhtml_legend=1 00:36:07.172 --rc geninfo_all_blocks=1 00:36:07.172 --rc geninfo_unexecuted_blocks=1 00:36:07.172 00:36:07.172 ' 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.172 --rc genhtml_branch_coverage=1 00:36:07.172 --rc genhtml_function_coverage=1 00:36:07.172 --rc genhtml_legend=1 00:36:07.172 --rc geninfo_all_blocks=1 00:36:07.172 --rc geninfo_unexecuted_blocks=1 00:36:07.172 00:36:07.172 ' 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.172 --rc genhtml_branch_coverage=1 00:36:07.172 --rc genhtml_function_coverage=1 
00:36:07.172 --rc genhtml_legend=1 00:36:07.172 --rc geninfo_all_blocks=1 00:36:07.172 --rc geninfo_unexecuted_blocks=1 00:36:07.172 00:36:07.172 ' 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:07.172 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:07.173 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:07.173 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:07.173 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:07.173 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:07.173 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.173 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:07.173 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.173 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:07.173 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:07.173 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:07.173 00:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:09.705 00:00:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:09.705 00:00:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:09.705 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:09.705 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:09.705 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:09.706 
00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:09.706 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:09.706 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:09.706 00:00:43 
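The block above is gather_supported_nvmf_pci_devs: nvmf/common.sh keeps whitelists of NIC device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus a set of Mellanox ConnectX IDs), matches them against the cached PCI bus scan, and resolves each hit to its kernel interface through sysfs; on this node both 0000:0a:00.0 and 0000:0a:00.1 are ice-driven 8086:159b ports whose net devices are cvl_0_0 and cvl_0_1. A simplified stand-in for that lookup, not the script's own implementation (it re-parses lspci directly instead of using the pci_bus_cache arrays, and the Mellanox IDs are omitted):

    #!/usr/bin/env bash
    # Resolve supported test NICs (vendor:device IDs seen in the trace above)
    # to kernel interface names via /sys/bus/pci/devices/<pci-addr>/net/.
    supported=("8086:1592" "8086:159b" "8086:37d2")   # E810 and X722 IDs only
    lspci -Dn | while read -r addr _class id _rest; do
        for want in "${supported[@]}"; do
            [[ $id == "$want" ]] || continue
            for netdir in "/sys/bus/pci/devices/$addr/net/"*; do
                [[ -e $netdir ]] && echo "Found $addr ($id) -> ${netdir##*/}"
            done
        done
    done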
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:09.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:09.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:36:09.706 00:36:09.706 --- 10.0.0.2 ping statistics --- 00:36:09.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:09.706 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:09.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:09.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:36:09.706 00:36:09.706 --- 10.0.0.1 ping statistics --- 00:36:09.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:09.706 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=347979 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 347979 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 347979 ']' 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:09.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
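nvmf_tcp_init, traced above, turns the two ice ports into a self-contained point-to-point rig: cvl_0_0 moves into a fresh cvl_0_0_ns_spdk network namespace and gets 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, an iptables rule accepts TCP/4420 on the initiator interface, and a ping in each direction verifies the path before the target is launched inside the namespace with the interrupt-mode arguments assembled earlier. A condensed sketch of that wiring and launch; paths are shortened, the iptables comment tag is dropped, and the socket poll at the end is a simplified stand-in for the waitforlisten helper:

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"               # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1        # target namespace -> root namespace
    # Launch the target inside the namespace with the flags shown in the trace.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    # Simplified stand-in for waitforlisten: poll for the RPC socket.
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done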
00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:09.706 [2024-11-20 00:00:43.675804] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:09.706 [2024-11-20 00:00:43.676928] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:36:09.706 [2024-11-20 00:00:43.677003] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:09.706 [2024-11-20 00:00:43.751806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:09.706 [2024-11-20 00:00:43.798601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:09.706 [2024-11-20 00:00:43.798655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:09.706 [2024-11-20 00:00:43.798684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:09.706 [2024-11-20 00:00:43.798695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:09.706 [2024-11-20 00:00:43.798704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:09.706 [2024-11-20 00:00:43.800211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:09.706 [2024-11-20 00:00:43.800264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:09.706 [2024-11-20 00:00:43.800267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:09.706 [2024-11-20 00:00:43.889917] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:09.706 [2024-11-20 00:00:43.890144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:09.706 [2024-11-20 00:00:43.890148] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:09.706 [2024-11-20 00:00:43.890450] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
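The startup notices above confirm the interrupt-mode configuration: the 0xE core mask brings up reactors on cores 1-3, every nvmf poll-group thread (and app_thread) is switched to interrupt mode, and tracepoint group mask 0xFFFF is enabled. As the app itself points out, the trace buffer can be snapshotted live or copied from /dev/shm for offline decoding; a hedged example of both, assuming spdk_trace was built into build/bin alongside the target:

    #!/usr/bin/env bash
    # Live snapshot of the target's tracepoints (app name "nvmf", shm instance 0),
    # the command the NOTICE lines above suggest.
    ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
    # Or keep the raw shared-memory trace file for later offline analysis.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0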
00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:09.706 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:09.707 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:09.707 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:09.707 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:09.707 00:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:09.965 [2024-11-20 00:00:44.185021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.965 00:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:10.223 00:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:10.481 [2024-11-20 00:00:44.721442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:10.481 00:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:10.738 00:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:10.997 Malloc0 00:36:10.997 00:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:11.566 Delay0 00:36:11.566 00:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:11.566 00:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:11.824 NULL1 00:36:11.824 00:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
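The ns_hotplug_stress.sh calls traced above provision the target over JSON-RPC: a TCP transport with the traced options (-o -u 8192), a cnode1 subsystem that allows any host and caps namespaces at 10, data and discovery listeners on 10.0.0.2:4420, and two backing bdevs, a Delay0 bdev layered on Malloc0 with 1,000,000 us configured latencies and a NULL1 null bdev, each attached as a namespace. The same sequence condensed, with rpc.py's full workspace path shortened to a variable:

    #!/usr/bin/env bash
    rpc=./scripts/rpc.py    # full jenkins workspace path in the trace, shortened here
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0             # 32 MiB malloc bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000      # read/write avg and p99 delays, us
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # added as the first namespace
    $rpc bdev_null_create NULL1 1000 512                  # size and block size as traced
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1     # added as the second namespace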
00:36:12.403 00:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=348398 00:36:12.403 00:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:12.403 00:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:12.404 00:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:13.345 Read completed with error (sct=0, sc=11) 00:36:13.345 00:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:13.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:13.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:13.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:13.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:13.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:13.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:13.603 00:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:13.603 00:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:13.861 true 00:36:13.861 00:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:13.861 00:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:14.794 00:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:15.050 00:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:15.050 00:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:15.307 true 00:36:15.307 00:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:15.307 00:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:15.564 00:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:15.822 00:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:15.822 00:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:16.080 true 00:36:16.080 00:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:16.080 00:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:16.338 00:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:16.596 00:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:16.596 00:00:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:16.854 true 00:36:16.854 00:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:16.854 00:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:17.789 00:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.047 00:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:18.047 00:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:18.305 true 00:36:18.305 00:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:18.305 00:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:18.563 00:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.824 00:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:18.824 00:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:19.085 true 00:36:19.085 00:00:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:19.085 00:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.345 00:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.603 00:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:19.603 00:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:19.860 true 00:36:19.860 00:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:19.860 00:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:20.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:20.797 00:00:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:20.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:21.055 00:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:21.055 00:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:21.314 true 00:36:21.314 00:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:21.314 00:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.571 00:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.830 00:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:21.830 00:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:22.088 true 00:36:22.088 00:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:22.088 00:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
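From this point the log is one long churn loop: spdk_nvme_perf (PID 348398) runs a 30-second, 512-byte random-read workload at queue depth 128 against 10.0.0.2:4420 while the script, for as long as that process stays alive, hot-removes namespace 1, re-adds Delay0, and bumps NULL1's size by one each pass (null_size 1001, 1002, ...). The repeated blocks above and below are iterations of that loop; its shape, reduced to the essentials with the harness plumbing (xtrace, error handling, path prefixes) dropped:

    #!/usr/bin/env bash
    rpc=./scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1
    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    perf_pid=$!
    null_size=1000
    while kill -0 "$perf_pid" 2>/dev/null; do        # keep churning until perf exits
        $rpc nvmf_subsystem_remove_ns "$subnqn" 1    # hot-remove namespace 1 under active I/O
        $rpc nvmf_subsystem_add_ns "$subnqn" Delay0  # plug the delay bdev back in
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"     # grow the other namespace's bdev
    done
    wait "$perf_pid"

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines scattered through the loop are the expected side effect of removing a namespace while reads are in flight.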
00:36:22.346 00:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.602 00:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:22.602 00:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:22.860 true 00:36:22.860 00:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:22.860 00:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.794 00:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.053 00:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:24.053 00:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:24.311 true 00:36:24.311 00:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:24.311 00:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:24.877 00:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.877 00:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:24.877 00:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:25.135 true 00:36:25.135 00:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:25.135 00:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.701 00:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:25.701 00:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:25.701 00:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:25.959 true 00:36:26.218 00:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:26.218 00:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.152 00:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.152 00:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:27.152 00:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:27.415 true 00:36:27.415 00:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:27.415 00:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.985 00:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.985 00:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:27.985 00:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:28.244 true 00:36:28.244 00:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:28.244 00:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.503 00:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.068 00:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:29.068 00:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:29.068 true 00:36:29.068 00:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:29.068 00:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.002 00:01:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.260 00:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:30.260 00:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:30.518 true 00:36:30.518 00:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:30.519 00:01:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.776 00:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.034 00:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:31.034 00:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:31.293 true 00:36:31.293 00:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:31.293 00:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.552 00:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.810 00:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:31.810 00:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:32.383 true 00:36:32.383 00:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:32.383 00:01:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.323 00:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.323 00:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:33.323 00:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:33.581 true 00:36:33.581 00:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:33.581 00:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.839 00:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.138 00:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:34.138 00:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:34.422 true 00:36:34.422 00:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:34.422 00:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.681 00:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.939 00:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:34.939 00:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:35.197 true 00:36:35.455 00:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:35.455 00:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:36.388 00:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.646 00:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:36.646 00:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:36.904 true 00:36:36.904 00:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:36.904 00:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.162 00:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.420 00:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:37.420 00:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:37.679 true 00:36:37.679 00:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:37.679 00:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.937 00:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.195 00:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:38.195 00:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:38.453 true 00:36:38.453 00:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:38.453 00:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.386 00:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.644 00:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:39.644 00:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:39.902 true 00:36:39.902 00:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:39.902 00:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.160 00:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.724 00:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:40.724 00:01:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:40.724 true 00:36:40.724 00:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:40.724 00:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.982 00:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.240 00:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:41.241 00:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:41.500 true 00:36:41.760 00:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:41.760 00:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.695 Initializing NVMe Controllers 00:36:42.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:42.695 Controller IO queue size 128, less than required. 00:36:42.695 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:42.695 Controller IO queue size 128, less than required. 00:36:42.695 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:42.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:42.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:42.695 Initialization complete. Launching workers. 
00:36:42.695 ========================================================
00:36:42.695 Latency(us)
00:36:42.695 Device Information : IOPS MiB/s Average min max
00:36:42.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 365.23 0.18 140081.57 3669.47 1061349.40
00:36:42.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8363.55 4.08 15307.51 2902.31 450230.78
00:36:42.695 ========================================================
00:36:42.695 Total : 8728.78 4.26 20528.31 2902.31 1061349.40
00:36:42.695
00:36:42.695 00:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.955 00:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:36:42.955 00:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:36:42.955 true 00:36:43.214 00:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 348398 00:36:43.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (348398) - No such process 00:36:43.214 00:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 348398 00:36:43.214 00:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.472 00:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:43.731 00:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:43.731 00:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:43.731 00:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:43.731 00:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:43.731 00:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:43.990 null0 00:36:43.990 00:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:43.990 00:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:43.990 00:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:44.248 null1 00:36:44.248 00:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:44.248 00:01:18
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:44.248 00:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:44.506 null2 00:36:44.506 00:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:44.506 00:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:44.506 00:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:44.764 null3 00:36:44.764 00:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:44.764 00:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:44.764 00:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:45.039 null4 00:36:45.039 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:45.039 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:45.039 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:45.299 null5 00:36:45.299 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:45.299 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:45.300 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:45.559 null6 00:36:45.559 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:45.559 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:45.559 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:45.818 null7 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:45.818 00:01:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
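From here the trace interleaves eight background workers. Each one is an invocation of the script's add_remove helper (trace markers @14-@18): it binds one namespace ID to one null bdev and then attaches and detaches that namespace ten times. A hedged reconstruction from the trace, not the original source:

  # add_remove <nsid> <bdev>: hammer one namespace ID with repeated attach/detach.
  add_remove() {
      local nsid=$1 bdev=$2
      local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
      for ((i = 0; i < 10; i++)); do                                                  # @16: ten rounds per worker
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17: attach bdev as namespace $nsid
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18: detach it again
      done
  }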
00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:45.818 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
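The workers themselves come from the short launcher visible at trace markers @58-@66: create one null bdev per worker (bdev_null_create nullN 100 4096, arguments copied from the trace), background add_remove for each, collect the PIDs, and wait on them (the wait on 352393 352394 ... appears just below). A sketch under those assumptions, reusing the add_remove reconstruction above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      "$rpc" bdev_null_create "null$i" 100 4096     # @60: one backing null bdev per worker
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove "$((i + 1))" "null$i" &            # @63: namespace IDs 1..8, one per worker
      pids+=($!)                                    # @64: remember the worker PIDs
  done
  wait "${pids[@]}"                                 # @66: block until every worker finishes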
00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:45.819 00:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:45.819 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:45.819 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:45.819 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:45.819 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 352393 352394 352396 352398 352400 352402 352404 352406 00:36:45.819 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:45.819 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:46.076 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:46.076 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:46.076 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:46.076 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:46.076 00:01:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.076 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:46.076 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:46.076 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:46.335 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:46.594 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:46.594 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:46.594 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:46.594 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.594 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:46.594 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:46.594 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:46.852 00:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
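The remainder of the trace is these eight workers interleaving: each @17 add is followed shortly by the matching @18 remove, with namespace IDs 1-8 cycling as fast as the RPC round-trips allow. To spot-check the subsystem while such a loop runs, one option (not part of this test, and assuming SPDK's stock nvmf_get_subsystems RPC and its JSON output) is to dump the currently attached namespace IDs:

  # Hypothetical spot-check, not taken from the log: print each subsystem's NQN and attached NSIDs.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_get_subsystems | python3 -c 'import json, sys; [print(ss["nqn"], sorted(ns["nsid"] for ns in ss.get("namespaces", []))) for ss in json.load(sys.stdin)]'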
00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.111 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:47.112 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.112 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.112 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:47.370 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:47.370 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:47.370 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:47.370 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:47.370 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.370 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:47.370 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:47.370 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:47.628 00:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:47.886 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:47.886 00:01:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:47.886 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:47.886 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:47.886 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:47.886 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.886 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:47.886 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:48.144 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.144 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.144 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:48.144 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.144 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.144 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:48.144 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.144 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.144 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:48.144 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.145 00:01:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.145 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:48.404 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.404 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:48.404 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:48.404 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:48.404 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:48.662 
00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:48.663 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:48.663 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:48.953 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.953 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.953 00:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:48.953 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:49.211 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:49.211 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.211 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:49.211 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:49.211 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:49.211 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:49.211 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:49.211 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.469 
00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.469 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:49.728 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:49.728 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:49.728 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.728 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:49.728 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:49.728 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:49.728 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:49.728 00:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.989 00:01:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.989 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:50.247 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:50.247 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:50.247 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.247 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:50.247 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:50.247 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:50.247 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:50.247 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:50.812 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:50.813 00:01:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.813 00:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:51.072 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:51.072 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:51.072 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.072 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:51.072 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:51.072 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:51.072 
00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:51.072 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.330 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.331 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:51.331 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.331 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.331 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:51.588 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:51.588 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:51.588 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.588 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:51.589 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:51.589 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:51.589 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:51.589 00:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
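The interleaved RPC calls above come from the hot-plug loop in target/ns_hotplug_stress.sh (script lines @16-@18 in the trace): each iteration attaches eight null bdevs to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 and then detaches them again while I/O keeps running against the subsystem. A minimal sketch of that loop follows; the rpc.py path and the individual commands are taken verbatim from the log, but the loop structure and the background "&" (which would explain the shuffled ordering of the add_ns/remove_ns lines) are an approximation, not a copy of the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    i=0
    while (( i < 10 )); do                      # matches the (( i < 10 )) guard in the trace
        for n in {1..8}; do
            # attach null0..null7 as namespaces 1..8; '&' assumed, hence the
            # out-of-order add_ns lines above
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
        done
        wait
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
        done
        wait
        (( ++i ))
    done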
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:51.853 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:51.854 rmmod nvme_tcp 00:36:51.854 rmmod nvme_fabrics 00:36:51.854 rmmod nvme_keyring 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 347979 ']' 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 347979 00:36:51.854 00:01:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 347979 ']' 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 347979 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:51.854 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 347979 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 347979' 00:36:52.115 killing process with pid 347979 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 347979 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 347979 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:52.115 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:52.116 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:52.116 00:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:54.717 00:36:54.717 real 0m47.304s 00:36:54.717 user 3m18.054s 00:36:54.717 sys 0m21.759s 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:54.717 00:01:28 
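The teardown traced across the chunk above (nvmftestfini) unloads the initiator kernel modules, kills the nvmf_tgt reactor process (pid 347979), strips the SPDK_NVMF-tagged iptables rules, and tears down the target network namespace. A condensed sketch of that sequence with the helpers collapsed into one function; the module names, pid handling, and iptables filter are taken from the trace, while the netns delete is an assumption about what _remove_spdk_ns does:

    nvmf_teardown_sketch() {
        local pid=$1                              # 347979 in this run

        modprobe -v -r nvme-tcp                   # also drags out nvme_fabrics / nvme_keyring
        modprobe -v -r nvme-fabrics

        if kill -0 "$pid" 2>/dev/null; then       # target still running?
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" 2>/dev/null || true       # reap it (only works when pid is a child)
        fi

        # keep every iptables rule except the ones tagged SPDK_NVMF during setup
        iptables-save | grep -v SPDK_NVMF | iptables-restore

        # assumed body of _remove_spdk_ns: drop the target netns and stray addresses
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
        ip -4 addr flush cvl_0_1
    }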
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:54.717 ************************************ 00:36:54.717 END TEST nvmf_ns_hotplug_stress 00:36:54.717 ************************************ 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:54.717 ************************************ 00:36:54.717 START TEST nvmf_delete_subsystem 00:36:54.717 ************************************ 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:54.717 * Looking for test storage... 00:36:54.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:36:54.717 00:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:54.717 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:54.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.718 --rc genhtml_branch_coverage=1 00:36:54.718 --rc genhtml_function_coverage=1 00:36:54.718 --rc genhtml_legend=1 00:36:54.718 --rc geninfo_all_blocks=1 00:36:54.718 --rc geninfo_unexecuted_blocks=1 00:36:54.718 00:36:54.718 ' 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:54.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.718 --rc genhtml_branch_coverage=1 00:36:54.718 --rc genhtml_function_coverage=1 00:36:54.718 --rc genhtml_legend=1 00:36:54.718 --rc geninfo_all_blocks=1 00:36:54.718 --rc geninfo_unexecuted_blocks=1 00:36:54.718 00:36:54.718 ' 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:54.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.718 --rc genhtml_branch_coverage=1 00:36:54.718 --rc genhtml_function_coverage=1 00:36:54.718 --rc genhtml_legend=1 00:36:54.718 --rc geninfo_all_blocks=1 00:36:54.718 --rc 
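The scripts/common.sh trace above (lt 1.15 2 resolving through cmp_versions) is only deciding whether the installed lcov is older than 2.x, so the matching --rc coverage options get exported. A simplified reconstruction of that comparison, following the field-by-field rule the trace walks through (split on '.', '-' and ':', compare numerically, first differing field wins); the real helper also routes each field through a decimal-normalizing function that is omitted here:

    lt_sketch() {  # usage: lt_sketch 1.15 2   -> returns 0 because 1.15 < 2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1     # first differing field decides
            (( a < b )) && return 0
        done
        return 1                        # equal -> not strictly less-than
    }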
geninfo_unexecuted_blocks=1 00:36:54.718 00:36:54.718 ' 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:54.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.718 --rc genhtml_branch_coverage=1 00:36:54.718 --rc genhtml_function_coverage=1 00:36:54.718 --rc genhtml_legend=1 00:36:54.718 --rc geninfo_all_blocks=1 00:36:54.718 --rc geninfo_unexecuted_blocks=1 00:36:54.718 00:36:54.718 ' 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:54.718 00:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:36:54.718 00:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:56.629 00:01:30 
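build_nvmf_app_args in the trace above assembles the argument array the target will eventually be launched with (the full command appears further down as nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3). A sketch of what those appends amount to; the interrupt_mode guard variable is a stand-in for whatever flag the '[' 1 -eq 1 ']' test actually reads:

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + "all tracepoint groups" mask
    interrupt_mode=1                              # assumed name for the flag tested as '[' 1 -eq 1 ']'
    if [ "$interrupt_mode" -eq 1 ]; then
        NVMF_APP+=(--interrupt-mode)              # this is the --interrupt-mode run flavour
    fi
    # nvmfappstart later prefixes the target netns and appends the core mask:
    #   ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x3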
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:56.629 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:56.630 00:01:30 
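The arrays being filled above are lookup tables of supported NIC PCI device IDs, keyed by vendor (intel=0x8086, mellanox=0x15b3); because this box carries E810 ports and the transport is tcp, pci_devs collapses to the e810 list. Roughly, the tables amount to the following (IDs copied from the trace, family labels are my annotation):

    intel=0x8086 mellanox=0x15b3
    e810=(0x1592 0x159b)              # Intel E810 family (ice driver), used on this testbed
    x722=(0x37d2)                     # Intel X722
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013)   # Mellanox mlx5-class devices
    pci_devs=("${e810[@]}")           # the [[ e810 == e810 ]] branch taken in the trace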
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:56.630 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:56.630 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.630 00:01:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:56.630 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:56.630 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
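The "Found 0000:0a:00.x" / "Found net devices under ..." lines above come from a discovery loop that maps each supported PCI function to its kernel net device via sysfs and keeps the interfaces that are up (cvl_0_0 and cvl_0_1 here). A stripped-down sketch of that loop; the operstate "up" check done by the real helper is omitted:

    net_devs=()
    for pci in 0000:0a:00.0 0000:0a:00.1; do            # the two E810 functions found above
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")          # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
    # result on this box: net_devs=(cvl_0_0 cvl_0_1)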
NVMF_TARGET_INTERFACE=cvl_0_0 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:56.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:56.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:36:56.630 00:36:56.630 --- 10.0.0.2 ping statistics --- 00:36:56.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.630 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:36:56.630 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:56.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:56.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:36:56.630 00:36:56.630 --- 10.0.0.1 ping statistics --- 00:36:56.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.630 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=355160 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 355160 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 355160 ']' 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:56.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
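Before the target is started, nvmf_tcp_init (traced above) splits the testbed into an initiator side and a target side: one E810 port stays in the default namespace as the initiator (cvl_0_1, 10.0.0.1), the other is moved into a private namespace for the target (cvl_0_0, 10.0.0.2), the NVMe/TCP port is opened in the firewall, and a ping in each direction verifies the path. Replayed as plain commands (the iptables comment text is abbreviated here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: test rule'                      # tagged so teardown can strip it
    ping -c 1 10.0.0.2                                                   # host -> target netns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target netns -> host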
00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:56.631 00:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:56.957 [2024-11-20 00:01:30.964608] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:56.957 [2024-11-20 00:01:30.965730] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:36:56.957 [2024-11-20 00:01:30.965784] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:56.957 [2024-11-20 00:01:31.044229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:56.957 [2024-11-20 00:01:31.091641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:56.957 [2024-11-20 00:01:31.091693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:56.957 [2024-11-20 00:01:31.091710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:56.957 [2024-11-20 00:01:31.091723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:56.957 [2024-11-20 00:01:31.091735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:56.957 [2024-11-20 00:01:31.093216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:56.957 [2024-11-20 00:01:31.093222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:56.957 [2024-11-20 00:01:31.186474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:56.958 [2024-11-20 00:01:31.186532] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:56.958 [2024-11-20 00:01:31.186778] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
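Distilled from the xtrace above into one place (the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing and the jenkins paths are this rig's convention, and the iptables rule in the trace additionally carries an SPDK_NVMF comment tag), the per-test network plumbing that nvmf/common.sh performs before starting the target comes down to roughly:

    # target-side port moves into a private network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side keeps 10.0.0.1 in the root namespace, target side gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic (port 4420) in, then sanity-check reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target itself then runs inside the namespace, interrupt mode, cores 0-1
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &

Because the target runs entirely inside the namespace, the perf initiator left in the root namespace reaches it over the cvl_0_0/cvl_0_1 link rather than via host loopback.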
00:36:56.958 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:56.958 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:36:56.958 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:56.958 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:56.958 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:56.958 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:56.958 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:56.958 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.958 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.281 [2024-11-20 00:01:31.241908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.281 [2024-11-20 00:01:31.262217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.281 NULL1 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.281 00:01:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.281 Delay0 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=355299 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:57.281 00:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:36:57.281 [2024-11-20 00:01:31.341688] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
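rpc_cmd in the trace is the test suite's wrapper around SPDK's scripts/rpc.py (talking to the /var/tmp/spdk.sock socket the target listens on); run standalone, the target-side setup for this test would look roughly like the sketch below. The delay bdev is the important piece: it wraps the null bdev with about a second of artificial latency per I/O (the -r/-t/-w/-n values are microseconds), so the subsystem still has plenty of I/O in flight when it is deleted.

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512                  # backing null bdev
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # slow it down on purpose
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # load generator from the root namespace: 5 s of mixed random read/write at queue depth 128
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &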
00:36:59.186 00:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:59.187 00:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.187 00:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 [2024-11-20 00:01:33.423507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc1f0000c40 is same with the state(6) to be set 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 starting 
I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 
00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 starting I/O failed: -6 00:36:59.187 [2024-11-20 00:01:33.424187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1461e70 is same with the state(6) to be set 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Write completed with error (sct=0, sc=8) 00:36:59.187 Read completed with error (sct=0, sc=8) 00:36:59.188 Write completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, 
sc=8) 00:36:59.188 Write completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Write completed with error (sct=0, sc=8) 00:36:59.188 Write completed with error (sct=0, sc=8) 00:36:59.188 Write completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Write completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Read completed with error (sct=0, sc=8) 00:36:59.188 Write completed with error (sct=0, sc=8) 00:36:59.188 Write completed with error (sct=0, sc=8) 00:37:00.120 [2024-11-20 00:01:34.397183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146f5b0 is same with the state(6) to be set 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 [2024-11-20 00:01:34.424181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc1f000d7e0 is same with the state(6) to be set 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 
Read completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 [2024-11-20 00:01:34.424373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc1f000d020 is same with the state(6) to be set 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 [2024-11-20 00:01:34.424735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14613f0 is same with the state(6) to be set 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Write completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 Read completed with error (sct=0, sc=8) 00:37:00.120 [2024-11-20 00:01:34.425237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1461b40 is same with the state(6) to be set 00:37:00.120 Initializing NVMe Controllers 00:37:00.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:00.120 Controller IO queue size 128, less than required. 00:37:00.120 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:00.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:00.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:00.120 Initialization complete. Launching workers. 
00:37:00.120 ======================================================== 00:37:00.120 Latency(us) 00:37:00.120 Device Information : IOPS MiB/s Average min max 00:37:00.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.17 0.08 906965.78 390.75 1013746.00 00:37:00.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.19 0.08 915341.65 557.49 1044492.96 00:37:00.121 ======================================================== 00:37:00.121 Total : 327.36 0.16 911115.64 390.75 1044492.96 00:37:00.121 00:37:00.121 [2024-11-20 00:01:34.425627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146f5b0 (9): Bad file descriptor 00:37:00.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:00.121 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.121 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:00.121 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 355299 00:37:00.121 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 355299 00:37:00.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (355299) - No such process 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 355299 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 355299 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 355299 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.687 [2024-11-20 00:01:34.946145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=355702 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 355702 00:37:00.687 00:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:00.943 [2024-11-20 00:01:35.004826] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
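The first pass above is a delete-while-busy check; reconstructed from the xtrace (this is the shape of the check, not the verbatim delete_subsystem.sh), it amounts to roughly:

    perf_pid=$!                       # spdk_nvme_perf started in the background, as above
    sleep 2                           # let it connect and queue I/O against Delay0
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo 'perf did not exit after subsystem delete'; exit 1; }
        sleep 0.5
    done
    ! wait "$perf_pid"                # perf must have failed: its controller went away mid-I/O

The storm of 'Read/Write completed with error (sct=0, sc=8)' lines and the non-zero perf exit earlier in the log are therefore the expected outcome: outstanding I/O is failed back to the initiator when the subsystem disappears. The second pass just started (pid 355702, a 3-second run against the re-created subsystem) is instead polled with the same kill -0 loop until it finishes and is then reaped with a plain wait.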
00:37:01.200 00:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:01.200 00:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 355702 00:37:01.200 00:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:01.765 00:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:01.765 00:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 355702 00:37:01.765 00:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:02.331 00:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:02.331 00:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 355702 00:37:02.331 00:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:02.896 00:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:02.896 00:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 355702 00:37:02.896 00:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:03.460 00:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:03.460 00:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 355702 00:37:03.460 00:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:03.717 00:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:03.717 00:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 355702 00:37:03.717 00:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:03.976 Initializing NVMe Controllers 00:37:03.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:03.976 Controller IO queue size 128, less than required. 00:37:03.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:03.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:03.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:03.976 Initialization complete. Launching workers. 
00:37:03.976 ======================================================== 00:37:03.976 Latency(us) 00:37:03.976 Device Information : IOPS MiB/s Average min max 00:37:03.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004460.96 1000232.29 1010819.00 00:37:03.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005772.67 1000229.77 1041851.60 00:37:03.976 ======================================================== 00:37:03.976 Total : 256.00 0.12 1005116.81 1000229.77 1041851.60 00:37:03.976 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 355702 00:37:04.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (355702) - No such process 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 355702 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:04.234 rmmod nvme_tcp 00:37:04.234 rmmod nvme_fabrics 00:37:04.234 rmmod nvme_keyring 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 355160 ']' 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 355160 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 355160 ']' 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 355160 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:04.234 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 355160 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 355160' 00:37:04.493 killing process with pid 355160 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 355160 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 355160 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:04.493 00:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.033 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:07.033 00:37:07.033 real 0m12.328s 00:37:07.033 user 0m24.660s 00:37:07.033 sys 0m3.684s 00:37:07.033 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:07.033 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:07.033 ************************************ 00:37:07.033 END TEST nvmf_delete_subsystem 00:37:07.033 ************************************ 00:37:07.033 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:07.034 ************************************ 00:37:07.034 START TEST nvmf_host_management 00:37:07.034 ************************************ 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:07.034 * Looking for test storage... 00:37:07.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:07.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.034 --rc genhtml_branch_coverage=1 00:37:07.034 --rc genhtml_function_coverage=1 00:37:07.034 --rc genhtml_legend=1 00:37:07.034 --rc geninfo_all_blocks=1 00:37:07.034 --rc geninfo_unexecuted_blocks=1 00:37:07.034 00:37:07.034 ' 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:07.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.034 --rc genhtml_branch_coverage=1 00:37:07.034 --rc genhtml_function_coverage=1 00:37:07.034 --rc genhtml_legend=1 00:37:07.034 --rc geninfo_all_blocks=1 00:37:07.034 --rc geninfo_unexecuted_blocks=1 00:37:07.034 00:37:07.034 ' 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:07.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.034 --rc genhtml_branch_coverage=1 00:37:07.034 --rc genhtml_function_coverage=1 00:37:07.034 --rc genhtml_legend=1 00:37:07.034 --rc geninfo_all_blocks=1 00:37:07.034 --rc geninfo_unexecuted_blocks=1 00:37:07.034 00:37:07.034 ' 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:07.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.034 --rc genhtml_branch_coverage=1 00:37:07.034 --rc genhtml_function_coverage=1 00:37:07.034 --rc genhtml_legend=1 
00:37:07.034 --rc geninfo_all_blocks=1 00:37:07.034 --rc geninfo_unexecuted_blocks=1 00:37:07.034 00:37:07.034 ' 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:07.034 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:07.035 00:01:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:07.035 00:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.035 00:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:07.035 00:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:07.035 00:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:07.035 00:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:08.951 00:01:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:08.951 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:08.951 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:08.951 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
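Both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, ice driver) are detected above, and the loop running here resolves each PCI function to its kernel network interface by listing /sys/bus/pci/devices/<addr>/net/. A standalone sketch of that lookup, using the two PCI addresses from this trace and simplified relative to nvmf/common.sh:

  # For each NIC PCI address, find the net device(s) the kernel bound to it.
  pci_devs=(0000:0a:00.0 0000:0a:00.1)                     # the two E810 ports found above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
          pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) # e.g. .../net/cvl_0_0
          pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the interface names
          echo "Found net devices under $pci: ${pci_net_devs[*]}"
          net_devs+=("${pci_net_devs[@]}")
  done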
00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:08.952 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:08.952 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:08.952 00:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:08.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:08.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:37:08.952 00:37:08.952 --- 10.0.0.2 ping statistics --- 00:37:08.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.952 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:08.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:08.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:37:08.952 00:37:08.952 --- 10.0.0.1 ping statistics --- 00:37:08.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.952 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=358038 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 358038 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 358038 ']' 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:08.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:08.952 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:08.952 [2024-11-20 00:01:43.113364] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:08.952 [2024-11-20 00:01:43.114467] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:37:08.952 [2024-11-20 00:01:43.114539] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:08.952 [2024-11-20 00:01:43.186285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:08.952 [2024-11-20 00:01:43.232818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:08.952 [2024-11-20 00:01:43.232880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:08.952 [2024-11-20 00:01:43.232895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:08.952 [2024-11-20 00:01:43.232920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:08.952 [2024-11-20 00:01:43.232930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:08.952 [2024-11-20 00:01:43.234470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:08.952 [2024-11-20 00:01:43.234523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:08.952 [2024-11-20 00:01:43.234576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:08.952 [2024-11-20 00:01:43.234579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:09.210 [2024-11-20 00:01:43.317406] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:09.211 [2024-11-20 00:01:43.317624] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:09.211 [2024-11-20 00:01:43.317946] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:09.211 [2024-11-20 00:01:43.318588] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:09.211 [2024-11-20 00:01:43.318827] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
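The nvmfappstart call above launches the target inside the cvl_0_0_ns_spdk namespace with -m 0x1E, i.e. core mask 0b11110, which is why the EAL reports four cores and reactors start on cores 1 through 4 while core 0 is left free for bdevperf. A minimal sketch of that launch-and-wait step, assuming the default RPC socket /var/tmp/spdk.sock and the framework_wait_init RPC in place of the test's waitforlisten helper (which is not shown in this excerpt); paths assume the SPDK repo root:

  # Start the target in the test namespace; 0x1E selects cores 1,2,3,4.
  ip netns exec cvl_0_0_ns_spdk \
          ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
  nvmfpid=$!
  # Block until the app has created /var/tmp/spdk.sock and finished initialization.
  ./scripts/rpc.py framework_wait_init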
00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:09.211 [2024-11-20 00:01:43.371253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:09.211 Malloc0 00:37:09.211 [2024-11-20 00:01:43.439490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=358082 00:37:09.211 00:01:43 
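Above, host_management.sh creates the TCP transport directly (nvmf_create_transport -t tcp -o -u 8192) and then pipes a generated rpcs.txt batch through rpc_cmd, which is where the Malloc0 bdev and the listener on 10.0.0.2 port 4420 reported in the log come from. The batch itself is not visible in this excerpt; a hedged sketch of an equivalent RPC sequence, using the sizes (64 MiB, 512-byte blocks) and the cnode0/host0 NQNs that appear elsewhere in this run:

  # Equivalent one-by-one RPCs; the test batches them through rpcs.txt instead.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0   # serial number assumed
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0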
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 358082 /var/tmp/bdevperf.sock 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 358082 ']' 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:09.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:09.211 { 00:37:09.211 "params": { 00:37:09.211 "name": "Nvme$subsystem", 00:37:09.211 "trtype": "$TEST_TRANSPORT", 00:37:09.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:09.211 "adrfam": "ipv4", 00:37:09.211 "trsvcid": "$NVMF_PORT", 00:37:09.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:09.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:09.211 "hdgst": ${hdgst:-false}, 00:37:09.211 "ddgst": ${ddgst:-false} 00:37:09.211 }, 00:37:09.211 "method": "bdev_nvme_attach_controller" 00:37:09.211 } 00:37:09.211 EOF 00:37:09.211 )") 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
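bdevperf is started above against its own RPC socket with a config streamed through process substitution (--json /dev/fd/63): queue depth 64 (-q), 64 KiB I/Os (-o 65536), a verify workload (-w verify) for 10 seconds (-t 10). The gen_nvmf_target_json heredoc in the trace expands to the bdev_nvme_attach_controller entry printed just below; a sketch of an equivalent launch, with the surrounding config wrapper written out explicitly (the wrapper is not shown in this excerpt and is an assumption based on SPDK's JSON config layout):

  cfg='{
    "subsystems": [{
      "subsystem": "bdev",
      "config": [{
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } }]
    }]
  }'
  # Attach the NVMe-oF controller at startup and run the verify workload for 10 s.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(printf '%s\n' "$cfg") \
          -q 64 -o 65536 -w verify -t 10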
00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:09.211 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:09.211 "params": { 00:37:09.211 "name": "Nvme0", 00:37:09.211 "trtype": "tcp", 00:37:09.211 "traddr": "10.0.0.2", 00:37:09.211 "adrfam": "ipv4", 00:37:09.211 "trsvcid": "4420", 00:37:09.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:09.211 "hdgst": false, 00:37:09.211 "ddgst": false 00:37:09.211 }, 00:37:09.211 "method": "bdev_nvme_attach_controller" 00:37:09.211 }' 00:37:09.469 [2024-11-20 00:01:43.522697] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:37:09.469 [2024-11-20 00:01:43.522774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358082 ] 00:37:09.469 [2024-11-20 00:01:43.592566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.469 [2024-11-20 00:01:43.638950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:09.726 Running I/O for 10 seconds... 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.726 00:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:09.726 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.726 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:09.726 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:09.726 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:09.983 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:09.983 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:09.983 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:09.983 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.983 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:09.983 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:10.245 [2024-11-20 00:01:44.331313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331393] 
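The waitforio loop above polls bdev_get_iostat on Nvme0n1 every 0.25 s until at least 100 reads have completed (67 on the first pass, 579 on the second), and only then does the test pull the host out of the subsystem, which is what triggers the qpair teardown that follows. A condensed sketch of that gate and the trigger, assuming the rpc.py CLI in place of the test's rpc_cmd wrapper and jq for parsing, run from the SPDK repo root:

  # Wait until bdevperf has real I/O in flight, then revoke the host's access.
  for i in {10..1}; do
          reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                  | jq -r '.bdevs[0].num_read_ops')
          [[ $reads -ge 100 ]] && break
          sleep 0.25
  done
  # Removing the host tears down its connection; outstanding WRITEs complete as
  # ABORTED - SQ DELETION, as seen in the bdevperf output that follows.
  ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0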
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the 
state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 [2024-11-20 00:01:44.331788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18246c0 is same with the state(6) to be set 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.245 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:10.245 [2024-11-20 00:01:44.336851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.336893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.336923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.336939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.336955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.336969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.336984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.336998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.337014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.337028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.337043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.337056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.337087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.337103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.337125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.337139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.337154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.337167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.337182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.337196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.337211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.337225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.337240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.337254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.337269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.337282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.337297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.337311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.337326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.245 [2024-11-20 00:01:44.337340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.245 [2024-11-20 00:01:44.337355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.337980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.337995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.246 [2024-11-20 00:01:44.338509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.246 [2024-11-20 00:01:44.338522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.338537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.247 [2024-11-20 00:01:44.338551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.338582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.247 [2024-11-20 00:01:44.338597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.338612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.247 [2024-11-20 00:01:44.338626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.338642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.247 [2024-11-20 00:01:44.338656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.338671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.247 [2024-11-20 00:01:44.338684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.338699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.247 [2024-11-20 00:01:44.338712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.338727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.247 [2024-11-20 00:01:44.338741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.338756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.247 [2024-11-20 00:01:44.338770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.338786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.247 [2024-11-20 00:01:44.338800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.338837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:10.247 [2024-11-20 00:01:44.338974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:10.247 [2024-11-20 00:01:44.338997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.339013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:10.247 [2024-11-20 00:01:44.339027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.339040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:10.247 [2024-11-20 00:01:44.339054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.339077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:10.247 [2024-11-20 00:01:44.339093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:10.247 [2024-11-20 00:01:44.339111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a2d70 is same with the state(6) to be set 00:37:10.247 [2024-11-20 00:01:44.340219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:10.247 task offset: 81920 on job bdev=Nvme0n1 fails 00:37:10.247 00:37:10.247 Latency(us) 00:37:10.247 [2024-11-19T23:01:44.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:10.247 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:10.247 Job: Nvme0n1 ended in about 0.40 seconds with error 00:37:10.247 Verification LBA range: start 0x0 length 0x400 00:37:10.247 Nvme0n1 : 0.40 1584.54 99.03 158.45 0.00 35670.03 2900.57 34369.99 00:37:10.247 [2024-11-19T23:01:44.559Z] =================================================================================================================== 00:37:10.247 [2024-11-19T23:01:44.559Z] Total : 1584.54 99.03 158.45 0.00 35670.03 2900.57 34369.99 00:37:10.247 [2024-11-20 00:01:44.342082] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:10.247 [2024-11-20 00:01:44.342110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a2d70 (9): Bad file descriptor 00:37:10.247 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.247 00:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:10.247 [2024-11-20 00:01:44.346228] 
bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 358082 00:37:11.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (358082) - No such process 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:11.178 { 00:37:11.178 "params": { 00:37:11.178 "name": "Nvme$subsystem", 00:37:11.178 "trtype": "$TEST_TRANSPORT", 00:37:11.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:11.178 "adrfam": "ipv4", 00:37:11.178 "trsvcid": "$NVMF_PORT", 00:37:11.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:11.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:11.178 "hdgst": ${hdgst:-false}, 00:37:11.178 "ddgst": ${ddgst:-false} 00:37:11.178 }, 00:37:11.178 "method": "bdev_nvme_attach_controller" 00:37:11.178 } 00:37:11.178 EOF 00:37:11.178 )") 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:11.178 00:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:11.178 "params": { 00:37:11.178 "name": "Nvme0", 00:37:11.178 "trtype": "tcp", 00:37:11.178 "traddr": "10.0.0.2", 00:37:11.178 "adrfam": "ipv4", 00:37:11.178 "trsvcid": "4420", 00:37:11.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:11.178 "hdgst": false, 00:37:11.178 "ddgst": false 00:37:11.178 }, 00:37:11.178 "method": "bdev_nvme_attach_controller" 00:37:11.178 }' 00:37:11.178 [2024-11-20 00:01:45.393496] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
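Sketch of the bdevperf invocation traced above: gen_nvmf_target_json feeds a bdev_nvme_attach_controller entry to bdevperf through --json /dev/fd/62. The standalone equivalent below reuses the exact parameter values printed in the trace; the top-level "subsystems"/"bdev" wrapper is an assumption based on SPDK's usual JSON config layout (gen_nvmf_target_json's full output is not shown in this excerpt), and the file name bdevperf.json plus the relative bdevperf path are illustrative.

# Sketch only: write the generated config to a file instead of /dev/fd/62.
cat > bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload flags as the trace: queue depth 64, 64 KiB I/O, verify workload, 1 second.
./build/examples/bdevperf --json bdevperf.json -q 64 -o 65536 -w verify -t 1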
00:37:11.178 [2024-11-20 00:01:45.393577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358356 ] 00:37:11.178 [2024-11-20 00:01:45.463013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.436 [2024-11-20 00:01:45.509418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.436 Running I/O for 1 seconds... 00:37:12.808 1664.00 IOPS, 104.00 MiB/s 00:37:12.808 Latency(us) 00:37:12.808 [2024-11-19T23:01:47.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:12.808 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:12.808 Verification LBA range: start 0x0 length 0x400 00:37:12.808 Nvme0n1 : 1.02 1702.36 106.40 0.00 0.00 36981.62 5000.15 33593.27 00:37:12.808 [2024-11-19T23:01:47.120Z] =================================================================================================================== 00:37:12.808 [2024-11-19T23:01:47.120Z] Total : 1702.36 106.40 0.00 0.00 36981.62 5000.15 33593.27 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:12.808 rmmod nvme_tcp 00:37:12.808 rmmod nvme_fabrics 00:37:12.808 rmmod nvme_keyring 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 358038 ']' 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 358038 00:37:12.808 00:01:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 358038 ']' 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 358038 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:12.808 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:12.809 00:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358038 00:37:12.809 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:12.809 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:12.809 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358038' 00:37:12.809 killing process with pid 358038 00:37:12.809 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 358038 00:37:12.809 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 358038 00:37:13.067 [2024-11-20 00:01:47.213112] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:13.067 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:13.067 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:13.067 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:13.067 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:13.067 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:13.067 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:13.067 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:13.067 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:13.067 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:13.067 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:13.067 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:13.067 00:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.601 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:15.601 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:15.601 00:37:15.601 real 0m8.435s 00:37:15.601 user 0m16.950s 
00:37:15.601 sys 0m3.565s 00:37:15.601 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:15.601 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:15.601 ************************************ 00:37:15.601 END TEST nvmf_host_management 00:37:15.601 ************************************ 00:37:15.601 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:15.601 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:15.601 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:15.601 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:15.601 ************************************ 00:37:15.601 START TEST nvmf_lvol 00:37:15.601 ************************************ 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:15.602 * Looking for test storage... 00:37:15.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:15.602 00:01:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:15.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.602 --rc genhtml_branch_coverage=1 00:37:15.602 --rc genhtml_function_coverage=1 00:37:15.602 --rc genhtml_legend=1 00:37:15.602 --rc geninfo_all_blocks=1 00:37:15.602 --rc geninfo_unexecuted_blocks=1 00:37:15.602 00:37:15.602 ' 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:15.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.602 --rc genhtml_branch_coverage=1 00:37:15.602 --rc genhtml_function_coverage=1 00:37:15.602 --rc genhtml_legend=1 00:37:15.602 --rc geninfo_all_blocks=1 00:37:15.602 --rc geninfo_unexecuted_blocks=1 00:37:15.602 00:37:15.602 ' 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:15.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.602 --rc genhtml_branch_coverage=1 00:37:15.602 --rc genhtml_function_coverage=1 00:37:15.602 --rc genhtml_legend=1 00:37:15.602 --rc geninfo_all_blocks=1 00:37:15.602 --rc geninfo_unexecuted_blocks=1 00:37:15.602 00:37:15.602 ' 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:15.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.602 --rc genhtml_branch_coverage=1 00:37:15.602 --rc genhtml_function_coverage=1 00:37:15.602 --rc 
genhtml_legend=1 00:37:15.602 --rc geninfo_all_blocks=1 00:37:15.602 --rc geninfo_unexecuted_blocks=1 00:37:15.602 00:37:15.602 ' 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.602 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:15.603 00:01:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:15.603 00:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:17.522 00:01:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:17.522 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:17.522 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:17.523 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:17.523 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:17.523 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:17.523 
00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:17.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:17.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:37:17.523 00:37:17.523 --- 10.0.0.2 ping statistics --- 00:37:17.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:17.523 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:17.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:17.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:37:17.523 00:37:17.523 --- 10.0.0.1 ping statistics --- 00:37:17.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:17.523 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=360433 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 360433 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 360433 ']' 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:17.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:17.523 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:17.523 [2024-11-20 00:01:51.621635] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
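Condensed from the nvmf_tcp_init trace above, the test network is a two-port physical-NIC topology: the target port (cvl_0_0) is moved into its own network namespace with 10.0.0.2/24, the initiator port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened, reachability is checked both ways, and nvmf_tgt is started inside the namespace in interrupt mode on three cores. A minimal sketch of the same commands (run as root; the interface names are the ice ports discovered earlier, and the relative nvmf_tgt path is illustrative):

# Target NIC into a private namespace; initiator NIC stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic in, then sanity-check reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Start the target inside the namespace: core mask 0x7, interrupt mode, tracepoint group mask 0xFFFF.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7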
00:37:17.523 [2024-11-20 00:01:51.622758] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:37:17.523 [2024-11-20 00:01:51.622814] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:17.523 [2024-11-20 00:01:51.697045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:17.523 [2024-11-20 00:01:51.743738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:17.523 [2024-11-20 00:01:51.743790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:17.523 [2024-11-20 00:01:51.743817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:17.524 [2024-11-20 00:01:51.743828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:17.524 [2024-11-20 00:01:51.743837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:17.524 [2024-11-20 00:01:51.745289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.524 [2024-11-20 00:01:51.745348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:17.524 [2024-11-20 00:01:51.745351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.524 [2024-11-20 00:01:51.829109] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:17.524 [2024-11-20 00:01:51.829285] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:17.524 [2024-11-20 00:01:51.829296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:17.524 [2024-11-20 00:01:51.829564] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
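The rpc.py traces that follow provision the lvol stack end to end: TCP transport, two malloc bdevs striped into raid0, an lvstore and lvol on top, an NVMe/TCP subsystem exporting the lvol, then a snapshot/resize/clone/inflate pass while spdk_nvme_perf writes to it. Condensed into one sketch (rpc.py paths shortened; the shell-variable capture of the returned UUIDs is illustrative, the commands and flags match the trace):

# Transport plus backing storage: two malloc bdevs (bdev_malloc_create 64 512) striped into raid0.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512              # -> Malloc0
scripts/rpc.py bdev_malloc_create 64 512              # -> Malloc1
scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
# Logical volume store and a lvol (created at size 20, resized to 30 below) on the raid.
lvs=$(scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs)
lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 20)
# Export the lvol over NVMe/TCP on the target address set up earlier.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# While spdk_nvme_perf runs randwrite against the subsystem, exercise the lvol features.
snap=$(scripts/rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
scripts/rpc.py bdev_lvol_resize "$lvol" 30
clone=$(scripts/rpc.py bdev_lvol_clone "$snap" MY_CLONE)
scripts/rpc.py bdev_lvol_inflate "$clone"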
00:37:17.781 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:17.781 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:17.781 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:17.781 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:17.781 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:17.781 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:17.781 00:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:18.039 [2024-11-20 00:01:52.134156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:18.039 00:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:18.298 00:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:18.298 00:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:18.556 00:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:18.556 00:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:18.814 00:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:19.072 00:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7a07a550-96e6-4eab-9fd6-46557588139c 00:37:19.072 00:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7a07a550-96e6-4eab-9fd6-46557588139c lvol 20 00:37:19.329 00:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f628149a-6099-48fe-911b-ce24b3e34d3b 00:37:19.329 00:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:19.587 00:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f628149a-6099-48fe-911b-ce24b3e34d3b 00:37:19.845 00:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:20.103 [2024-11-20 00:01:54.394367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:37:20.361 00:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:20.617 00:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=360855 00:37:20.617 00:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:20.617 00:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:21.550 00:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f628149a-6099-48fe-911b-ce24b3e34d3b MY_SNAPSHOT 00:37:21.810 00:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2dec4229-4e60-43a9-a7e4-4a4320eee0cc 00:37:21.810 00:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f628149a-6099-48fe-911b-ce24b3e34d3b 30 00:37:22.070 00:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2dec4229-4e60-43a9-a7e4-4a4320eee0cc MY_CLONE 00:37:22.636 00:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6cdffa8d-a5ab-4f41-abf2-e7f4962fdf47 00:37:22.636 00:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6cdffa8d-a5ab-4f41-abf2-e7f4962fdf47 00:37:22.893 00:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 360855 00:37:31.074 Initializing NVMe Controllers 00:37:31.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:31.074 Controller IO queue size 128, less than required. 00:37:31.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:31.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:31.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:31.074 Initialization complete. Launching workers. 
00:37:31.074 ======================================================== 00:37:31.074 Latency(us) 00:37:31.074 Device Information : IOPS MiB/s Average min max 00:37:31.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10530.40 41.13 12159.20 1693.55 55376.26 00:37:31.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10539.00 41.17 12152.76 1854.82 58424.12 00:37:31.074 ======================================================== 00:37:31.074 Total : 21069.40 82.30 12155.98 1693.55 58424.12 00:37:31.074 00:37:31.074 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:31.074 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f628149a-6099-48fe-911b-ce24b3e34d3b 00:37:31.332 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7a07a550-96e6-4eab-9fd6-46557588139c 00:37:31.593 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:31.593 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:31.593 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:31.593 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:31.593 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:31.593 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:31.593 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:31.593 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:31.593 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:31.593 rmmod nvme_tcp 00:37:31.593 rmmod nvme_fabrics 00:37:31.852 rmmod nvme_keyring 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 360433 ']' 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 360433 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 360433 ']' 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 360433 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 360433 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 360433' 00:37:31.852 killing process with pid 360433 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 360433 00:37:31.852 00:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 360433 00:37:32.111 00:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:32.111 00:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:32.111 00:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:32.111 00:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:32.111 00:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:32.111 00:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:32.111 00:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:32.111 00:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:32.111 00:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:32.111 00:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:32.111 00:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:32.111 00:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:34.027 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:34.027 00:37:34.027 real 0m18.945s 00:37:34.027 user 0m55.527s 00:37:34.027 sys 0m8.014s 00:37:34.027 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:34.027 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:34.027 ************************************ 00:37:34.027 END TEST nvmf_lvol 00:37:34.027 ************************************ 00:37:34.027 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:34.028 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:34.028 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:34.028 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:34.028 ************************************ 00:37:34.028 START TEST nvmf_lvs_grow 00:37:34.028 
************************************ 00:37:34.028 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:34.293 * Looking for test storage... 00:37:34.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:34.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.293 --rc genhtml_branch_coverage=1 00:37:34.293 --rc genhtml_function_coverage=1 00:37:34.293 --rc genhtml_legend=1 00:37:34.293 --rc geninfo_all_blocks=1 00:37:34.293 --rc geninfo_unexecuted_blocks=1 00:37:34.293 00:37:34.293 ' 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:34.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.293 --rc genhtml_branch_coverage=1 00:37:34.293 --rc genhtml_function_coverage=1 00:37:34.293 --rc genhtml_legend=1 00:37:34.293 --rc geninfo_all_blocks=1 00:37:34.293 --rc geninfo_unexecuted_blocks=1 00:37:34.293 00:37:34.293 ' 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:34.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.293 --rc genhtml_branch_coverage=1 00:37:34.293 --rc genhtml_function_coverage=1 00:37:34.293 --rc genhtml_legend=1 00:37:34.293 --rc geninfo_all_blocks=1 00:37:34.293 --rc geninfo_unexecuted_blocks=1 00:37:34.293 00:37:34.293 ' 00:37:34.293 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:34.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.293 --rc genhtml_branch_coverage=1 00:37:34.293 --rc genhtml_function_coverage=1 00:37:34.293 --rc genhtml_legend=1 00:37:34.293 --rc geninfo_all_blocks=1 00:37:34.293 --rc geninfo_unexecuted_blocks=1 00:37:34.293 00:37:34.293 ' 00:37:34.293 00:02:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:34.294 00:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:36.214 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:36.214 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:36.214 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:36.214 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:36.214 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:36.214 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:36.214 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:36.215 00:02:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:36.215 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:36.215 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:36.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:36.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:36.215 00:02:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:36.215 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:36.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:36.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:37:36.496 00:37:36.496 --- 10.0.0.2 ping statistics --- 00:37:36.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:36.496 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:36.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:36.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:37:36.496 00:37:36.496 --- 10.0.0.1 ping statistics --- 00:37:36.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:36.496 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=364106 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 364106 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 364106 ']' 00:37:36.496 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:36.497 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:36.497 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:36.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:36.497 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:36.497 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:36.497 [2024-11-20 00:02:10.701923] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:37:36.497 [2024-11-20 00:02:10.703014] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:37:36.497 [2024-11-20 00:02:10.703094] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:36.497 [2024-11-20 00:02:10.782491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.760 [2024-11-20 00:02:10.830118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:36.760 [2024-11-20 00:02:10.830178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:36.760 [2024-11-20 00:02:10.830212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:36.760 [2024-11-20 00:02:10.830227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:36.760 [2024-11-20 00:02:10.830239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:36.760 [2024-11-20 00:02:10.830879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.760 [2024-11-20 00:02:10.921603] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:36.760 [2024-11-20 00:02:10.921972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:36.760 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:36.760 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:37:36.760 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:36.760 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:36.760 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:36.760 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:36.760 00:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:37.018 [2024-11-20 00:02:11.231552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:37.018 ************************************ 00:37:37.018 START TEST lvs_grow_clean 00:37:37.018 ************************************ 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:37.018 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:37.277 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:37.277 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:37.845 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2d238809-4fc4-48f7-9639-0dcbb933e2cb 00:37:37.845 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb 00:37:37.845 00:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:38.109 00:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:38.109 00:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:38.109 00:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb lvol 150 00:37:38.367 00:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=333b5d70-a8a6-43e0-97fc-afef5585738b 00:37:38.367 00:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:38.367 00:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:38.625 [2024-11-20 00:02:12.747425] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:38.625 [2024-11-20 00:02:12.747531] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:38.625 true 00:37:38.625 00:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb 00:37:38.625 00:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:38.883 00:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:38.883 00:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:39.142 00:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 333b5d70-a8a6-43e0-97fc-afef5585738b 00:37:39.400 00:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:39.657 [2024-11-20 00:02:13.887825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.657 00:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:39.916 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=364546 00:37:39.916 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:39.916 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:39.916 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 364546 /var/tmp/bdevperf.sock 00:37:39.916 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 364546 ']' 00:37:39.916 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:39.916 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:39.916 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:39.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:39.916 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:39.916 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:40.174 [2024-11-20 00:02:14.231785] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:37:40.174 [2024-11-20 00:02:14.231873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364546 ] 00:37:40.174 [2024-11-20 00:02:14.307445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.174 [2024-11-20 00:02:14.357388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:40.174 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:40.174 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:37:40.174 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:40.740 Nvme0n1 00:37:40.740 00:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:40.997 [ 00:37:40.997 { 00:37:40.997 "name": "Nvme0n1", 00:37:40.997 "aliases": [ 00:37:40.997 "333b5d70-a8a6-43e0-97fc-afef5585738b" 00:37:40.997 ], 00:37:40.997 "product_name": "NVMe disk", 00:37:40.997 "block_size": 4096, 00:37:40.997 "num_blocks": 38912, 00:37:40.997 "uuid": "333b5d70-a8a6-43e0-97fc-afef5585738b", 00:37:40.997 "numa_id": 0, 00:37:40.997 "assigned_rate_limits": { 00:37:40.997 "rw_ios_per_sec": 0, 00:37:40.997 "rw_mbytes_per_sec": 0, 00:37:40.997 "r_mbytes_per_sec": 0, 00:37:40.997 "w_mbytes_per_sec": 0 00:37:40.997 }, 00:37:40.997 "claimed": false, 00:37:40.997 "zoned": false, 00:37:40.997 "supported_io_types": { 00:37:40.997 "read": true, 00:37:40.997 "write": true, 00:37:40.997 "unmap": true, 00:37:40.997 "flush": true, 00:37:40.997 "reset": true, 00:37:40.997 "nvme_admin": true, 00:37:40.997 "nvme_io": true, 00:37:40.997 "nvme_io_md": false, 00:37:40.997 "write_zeroes": true, 00:37:40.997 "zcopy": false, 00:37:40.997 "get_zone_info": false, 00:37:40.997 "zone_management": false, 00:37:40.997 "zone_append": false, 00:37:40.997 "compare": true, 00:37:40.997 "compare_and_write": true, 00:37:40.997 "abort": true, 00:37:40.997 "seek_hole": false, 00:37:40.997 "seek_data": false, 00:37:40.997 "copy": true, 
00:37:40.997 "nvme_iov_md": false 00:37:40.997 }, 00:37:40.997 "memory_domains": [ 00:37:40.997 { 00:37:40.997 "dma_device_id": "system", 00:37:40.997 "dma_device_type": 1 00:37:40.997 } 00:37:40.997 ], 00:37:40.997 "driver_specific": { 00:37:40.997 "nvme": [ 00:37:40.997 { 00:37:40.997 "trid": { 00:37:40.997 "trtype": "TCP", 00:37:40.997 "adrfam": "IPv4", 00:37:40.997 "traddr": "10.0.0.2", 00:37:40.997 "trsvcid": "4420", 00:37:40.997 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:40.997 }, 00:37:40.997 "ctrlr_data": { 00:37:40.997 "cntlid": 1, 00:37:40.997 "vendor_id": "0x8086", 00:37:40.997 "model_number": "SPDK bdev Controller", 00:37:40.997 "serial_number": "SPDK0", 00:37:40.997 "firmware_revision": "25.01", 00:37:40.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:40.997 "oacs": { 00:37:40.997 "security": 0, 00:37:40.997 "format": 0, 00:37:40.997 "firmware": 0, 00:37:40.997 "ns_manage": 0 00:37:40.997 }, 00:37:40.997 "multi_ctrlr": true, 00:37:40.997 "ana_reporting": false 00:37:40.997 }, 00:37:40.997 "vs": { 00:37:40.997 "nvme_version": "1.3" 00:37:40.997 }, 00:37:40.997 "ns_data": { 00:37:40.997 "id": 1, 00:37:40.997 "can_share": true 00:37:40.997 } 00:37:40.997 } 00:37:40.997 ], 00:37:40.997 "mp_policy": "active_passive" 00:37:40.997 } 00:37:40.997 } 00:37:40.997 ] 00:37:40.997 00:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=364678 00:37:40.997 00:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:40.997 00:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:41.255 Running I/O for 10 seconds... 
00:37:42.188 Latency(us) 00:37:42.188 [2024-11-19T23:02:16.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:42.188 Nvme0n1 : 1.00 13589.00 53.08 0.00 0.00 0.00 0.00 0.00 00:37:42.188 [2024-11-19T23:02:16.500Z] =================================================================================================================== 00:37:42.188 [2024-11-19T23:02:16.500Z] Total : 13589.00 53.08 0.00 0.00 0.00 0.00 0.00 00:37:42.188 00:37:43.128 00:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb 00:37:43.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:43.128 Nvme0n1 : 2.00 13779.50 53.83 0.00 0.00 0.00 0.00 0.00 00:37:43.128 [2024-11-19T23:02:17.440Z] =================================================================================================================== 00:37:43.128 [2024-11-19T23:02:17.440Z] Total : 13779.50 53.83 0.00 0.00 0.00 0.00 0.00 00:37:43.128 00:37:43.385 true 00:37:43.385 00:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb 00:37:43.385 00:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:43.642 00:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:43.642 00:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:43.642 00:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 364678 00:37:44.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:44.208 Nvme0n1 : 3.00 13927.67 54.40 0.00 0.00 0.00 0.00 0.00 00:37:44.208 [2024-11-19T23:02:18.520Z] =================================================================================================================== 00:37:44.208 [2024-11-19T23:02:18.520Z] Total : 13927.67 54.40 0.00 0.00 0.00 0.00 0.00 00:37:44.208 00:37:45.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:45.141 Nvme0n1 : 4.00 14001.75 54.69 0.00 0.00 0.00 0.00 0.00 00:37:45.141 [2024-11-19T23:02:19.453Z] =================================================================================================================== 00:37:45.141 [2024-11-19T23:02:19.453Z] Total : 14001.75 54.69 0.00 0.00 0.00 0.00 0.00 00:37:45.141 00:37:46.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:46.075 Nvme0n1 : 5.00 14046.20 54.87 0.00 0.00 0.00 0.00 0.00 00:37:46.075 [2024-11-19T23:02:20.387Z] =================================================================================================================== 00:37:46.075 [2024-11-19T23:02:20.387Z] Total : 14046.20 54.87 0.00 0.00 0.00 0.00 0.00 00:37:46.075 00:37:47.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:47.447 Nvme0n1 : 6.00 14097.00 55.07 0.00 0.00 0.00 0.00 0.00 00:37:47.447 [2024-11-19T23:02:21.759Z] 
=================================================================================================================== 00:37:47.447 [2024-11-19T23:02:21.759Z] Total : 14097.00 55.07 0.00 0.00 0.00 0.00 0.00 00:37:47.447 00:37:48.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:48.382 Nvme0n1 : 7.00 14133.29 55.21 0.00 0.00 0.00 0.00 0.00 00:37:48.382 [2024-11-19T23:02:22.694Z] =================================================================================================================== 00:37:48.382 [2024-11-19T23:02:22.694Z] Total : 14133.29 55.21 0.00 0.00 0.00 0.00 0.00 00:37:48.382 00:37:49.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:49.315 Nvme0n1 : 8.00 14168.50 55.35 0.00 0.00 0.00 0.00 0.00 00:37:49.315 [2024-11-19T23:02:23.627Z] =================================================================================================================== 00:37:49.315 [2024-11-19T23:02:23.627Z] Total : 14168.50 55.35 0.00 0.00 0.00 0.00 0.00 00:37:49.315 00:37:50.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:50.304 Nvme0n1 : 9.00 14202.89 55.48 0.00 0.00 0.00 0.00 0.00 00:37:50.304 [2024-11-19T23:02:24.616Z] =================================================================================================================== 00:37:50.304 [2024-11-19T23:02:24.616Z] Total : 14202.89 55.48 0.00 0.00 0.00 0.00 0.00 00:37:50.304 00:37:51.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:51.260 Nvme0n1 : 10.00 14217.70 55.54 0.00 0.00 0.00 0.00 0.00 00:37:51.260 [2024-11-19T23:02:25.572Z] =================================================================================================================== 00:37:51.260 [2024-11-19T23:02:25.572Z] Total : 14217.70 55.54 0.00 0.00 0.00 0.00 0.00 00:37:51.260 00:37:51.260 00:37:51.260 Latency(us) 00:37:51.260 [2024-11-19T23:02:25.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:51.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:51.260 Nvme0n1 : 10.01 14222.11 55.56 0.00 0.00 8995.06 5509.88 22622.06 00:37:51.260 [2024-11-19T23:02:25.572Z] =================================================================================================================== 00:37:51.260 [2024-11-19T23:02:25.572Z] Total : 14222.11 55.56 0.00 0.00 8995.06 5509.88 22622.06 00:37:51.260 { 00:37:51.260 "results": [ 00:37:51.260 { 00:37:51.260 "job": "Nvme0n1", 00:37:51.260 "core_mask": "0x2", 00:37:51.260 "workload": "randwrite", 00:37:51.260 "status": "finished", 00:37:51.260 "queue_depth": 128, 00:37:51.260 "io_size": 4096, 00:37:51.260 "runtime": 10.005901, 00:37:51.260 "iops": 14222.107534343984, 00:37:51.260 "mibps": 55.55510755603119, 00:37:51.260 "io_failed": 0, 00:37:51.260 "io_timeout": 0, 00:37:51.260 "avg_latency_us": 8995.06487914456, 00:37:51.260 "min_latency_us": 5509.878518518519, 00:37:51.260 "max_latency_us": 22622.056296296298 00:37:51.260 } 00:37:51.260 ], 00:37:51.260 "core_count": 1 00:37:51.260 } 00:37:51.260 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 364546 00:37:51.260 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 364546 ']' 00:37:51.260 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 364546 
00:37:51.260 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:37:51.260 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:51.260 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 364546 00:37:51.260 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:51.260 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:51.260 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 364546' 00:37:51.260 killing process with pid 364546 00:37:51.260 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 364546 00:37:51.260 Received shutdown signal, test time was about 10.000000 seconds 00:37:51.260 00:37:51.260 Latency(us) 00:37:51.260 [2024-11-19T23:02:25.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:51.260 [2024-11-19T23:02:25.572Z] =================================================================================================================== 00:37:51.260 [2024-11-19T23:02:25.572Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:51.260 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 364546 00:37:51.519 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:51.776 00:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:52.035 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb 00:37:52.035 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:52.297 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:52.297 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:52.297 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:52.557 [2024-11-20 00:02:26.691446] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:52.557 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb 
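With the I/O run finished, the clean variant tears the stack down and proves the lvstore really disappears with its backing bdev: deleting aio_bdev hot-removes the lvol and closes the store (the vbdev_lvs_hotremove_cb notice above), so the next bdev_lvol_get_lvstores call is expected to fail. The NOT wrapper whose xtrace follows asserts exactly that; below is a condensed equivalent of the check, not the helper's actual implementation ($rpc is shorthand for the rpc.py path in the log):

$rpc bdev_aio_delete aio_bdev                     # hot-removes the lvol and closes lvstore "lvs"
if $rpc bdev_lvol_get_lvstores -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb; then
    echo "lvstore still answers after its base bdev was deleted" >&2
    exit 1                                        # the JSON-RPC "No such device" reply below is the expected outcome
fi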
00:37:52.557 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:37:52.557 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb 00:37:52.557 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:52.557 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:52.557 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:52.557 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:52.557 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:52.557 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:52.557 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:52.557 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:52.557 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb 00:37:52.816 request: 00:37:52.816 { 00:37:52.816 "uuid": "2d238809-4fc4-48f7-9639-0dcbb933e2cb", 00:37:52.816 "method": "bdev_lvol_get_lvstores", 00:37:52.816 "req_id": 1 00:37:52.816 } 00:37:52.816 Got JSON-RPC error response 00:37:52.816 response: 00:37:52.816 { 00:37:52.816 "code": -19, 00:37:52.816 "message": "No such device" 00:37:52.816 } 00:37:52.816 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:37:52.816 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:52.816 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:52.816 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:52.816 00:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:53.075 aio_bdev 00:37:53.075 00:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
333b5d70-a8a6-43e0-97fc-afef5585738b 00:37:53.075 00:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=333b5d70-a8a6-43e0-97fc-afef5585738b 00:37:53.075 00:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:53.075 00:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:37:53.075 00:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:53.075 00:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:53.075 00:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:53.333 00:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 333b5d70-a8a6-43e0-97fc-afef5585738b -t 2000 00:37:53.591 [ 00:37:53.591 { 00:37:53.591 "name": "333b5d70-a8a6-43e0-97fc-afef5585738b", 00:37:53.591 "aliases": [ 00:37:53.591 "lvs/lvol" 00:37:53.591 ], 00:37:53.591 "product_name": "Logical Volume", 00:37:53.591 "block_size": 4096, 00:37:53.591 "num_blocks": 38912, 00:37:53.591 "uuid": "333b5d70-a8a6-43e0-97fc-afef5585738b", 00:37:53.591 "assigned_rate_limits": { 00:37:53.591 "rw_ios_per_sec": 0, 00:37:53.591 "rw_mbytes_per_sec": 0, 00:37:53.591 "r_mbytes_per_sec": 0, 00:37:53.591 "w_mbytes_per_sec": 0 00:37:53.591 }, 00:37:53.591 "claimed": false, 00:37:53.591 "zoned": false, 00:37:53.591 "supported_io_types": { 00:37:53.591 "read": true, 00:37:53.591 "write": true, 00:37:53.591 "unmap": true, 00:37:53.591 "flush": false, 00:37:53.591 "reset": true, 00:37:53.591 "nvme_admin": false, 00:37:53.591 "nvme_io": false, 00:37:53.591 "nvme_io_md": false, 00:37:53.591 "write_zeroes": true, 00:37:53.591 "zcopy": false, 00:37:53.591 "get_zone_info": false, 00:37:53.591 "zone_management": false, 00:37:53.591 "zone_append": false, 00:37:53.591 "compare": false, 00:37:53.591 "compare_and_write": false, 00:37:53.591 "abort": false, 00:37:53.591 "seek_hole": true, 00:37:53.591 "seek_data": true, 00:37:53.591 "copy": false, 00:37:53.591 "nvme_iov_md": false 00:37:53.591 }, 00:37:53.591 "driver_specific": { 00:37:53.591 "lvol": { 00:37:53.591 "lvol_store_uuid": "2d238809-4fc4-48f7-9639-0dcbb933e2cb", 00:37:53.591 "base_bdev": "aio_bdev", 00:37:53.591 "thin_provision": false, 00:37:53.591 "num_allocated_clusters": 38, 00:37:53.591 "snapshot": false, 00:37:53.591 "clone": false, 00:37:53.592 "esnap_clone": false 00:37:53.592 } 00:37:53.592 } 00:37:53.592 } 00:37:53.592 ] 00:37:53.592 00:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:37:53.592 00:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb 00:37:53.592 00:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:53.849 00:02:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:53.849 00:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:53.849 00:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb 00:37:54.108 00:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:54.108 00:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 333b5d70-a8a6-43e0-97fc-afef5585738b 00:37:54.366 00:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2d238809-4fc4-48f7-9639-0dcbb933e2cb 00:37:54.624 00:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.192 00:37:55.192 real 0m17.942s 00:37:55.192 user 0m17.431s 00:37:55.192 sys 0m1.868s 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:55.192 ************************************ 00:37:55.192 END TEST lvs_grow_clean 00:37:55.192 ************************************ 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.192 ************************************ 00:37:55.192 START TEST lvs_grow_dirty 00:37:55.192 ************************************ 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.192 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:55.451 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:55.451 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:55.710 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:37:55.710 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:37:55.710 00:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:55.968 00:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:55.968 00:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:55.968 00:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d lvol 150 00:37:56.227 00:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=16ef3b26-1f0b-4fa9-a449-ce273b948357 00:37:56.227 00:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:56.227 00:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:56.485 [2024-11-20 00:02:30.683406] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:56.485 [2024-11-20 00:02:30.683522] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:56.486 true 00:37:56.486 00:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:37:56.486 00:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:56.744 00:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:56.744 00:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:57.001 00:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16ef3b26-1f0b-4fa9-a449-ce273b948357 00:37:57.259 00:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:57.517 [2024-11-20 00:02:31.779743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:57.517 00:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:57.774 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=366694 00:37:57.774 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:57.774 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:57.775 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 366694 /var/tmp/bdevperf.sock 00:37:57.775 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 366694 ']' 00:37:57.775 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:57.775 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:57.775 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:57.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
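Setup for the dirty variant is now complete: a 200 MiB AIO file carries an lvstore with 4 MiB clusters (49 data clusters), a 150 MiB lvol is carved out of it, the backing file is grown to 400 MiB and rescanned so the store can be grown later, and the lvol is exported over NVMe/TCP for bdevperf to exercise. A condensed sketch of those steps as they appear in the trace above ($rpc and $aio_file abbreviate the full rpc.py and .../test/nvmf/target/aio_bdev paths):

truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
$rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
$rpc bdev_lvol_create -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d lvol 150    # 150 MiB volume, 38912 4K blocks
truncate -s 400M "$aio_file"                      # enlarge the backing file...
$rpc bdev_aio_rescan aio_bdev                     # ...and let SPDK pick up the new block count
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16ef3b26-1f0b-4fa9-a449-ce273b948357
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf, launched just above with -q 128 -w randwrite -t 10, then attaches to this subsystem over 10.0.0.2:4420 (the bdev_nvme_attach_controller call that follows) and sees the volume as Nvme0n1.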
00:37:57.775 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:57.775 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:58.033 [2024-11-20 00:02:32.106312] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:37:58.033 [2024-11-20 00:02:32.106409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366694 ] 00:37:58.033 [2024-11-20 00:02:32.174334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.033 [2024-11-20 00:02:32.222188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.291 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:58.291 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:58.291 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:58.548 Nvme0n1 00:37:58.548 00:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:58.806 [ 00:37:58.806 { 00:37:58.806 "name": "Nvme0n1", 00:37:58.806 "aliases": [ 00:37:58.806 "16ef3b26-1f0b-4fa9-a449-ce273b948357" 00:37:58.806 ], 00:37:58.806 "product_name": "NVMe disk", 00:37:58.806 "block_size": 4096, 00:37:58.806 "num_blocks": 38912, 00:37:58.806 "uuid": "16ef3b26-1f0b-4fa9-a449-ce273b948357", 00:37:58.806 "numa_id": 0, 00:37:58.806 "assigned_rate_limits": { 00:37:58.806 "rw_ios_per_sec": 0, 00:37:58.806 "rw_mbytes_per_sec": 0, 00:37:58.806 "r_mbytes_per_sec": 0, 00:37:58.806 "w_mbytes_per_sec": 0 00:37:58.806 }, 00:37:58.806 "claimed": false, 00:37:58.806 "zoned": false, 00:37:58.806 "supported_io_types": { 00:37:58.806 "read": true, 00:37:58.806 "write": true, 00:37:58.806 "unmap": true, 00:37:58.806 "flush": true, 00:37:58.806 "reset": true, 00:37:58.806 "nvme_admin": true, 00:37:58.806 "nvme_io": true, 00:37:58.806 "nvme_io_md": false, 00:37:58.806 "write_zeroes": true, 00:37:58.806 "zcopy": false, 00:37:58.806 "get_zone_info": false, 00:37:58.806 "zone_management": false, 00:37:58.806 "zone_append": false, 00:37:58.806 "compare": true, 00:37:58.806 "compare_and_write": true, 00:37:58.806 "abort": true, 00:37:58.806 "seek_hole": false, 00:37:58.806 "seek_data": false, 00:37:58.806 "copy": true, 00:37:58.806 "nvme_iov_md": false 00:37:58.806 }, 00:37:58.806 "memory_domains": [ 00:37:58.806 { 00:37:58.806 "dma_device_id": "system", 00:37:58.806 "dma_device_type": 1 00:37:58.806 } 00:37:58.806 ], 00:37:58.806 "driver_specific": { 00:37:58.806 "nvme": [ 00:37:58.806 { 00:37:58.806 "trid": { 00:37:58.806 "trtype": "TCP", 00:37:58.806 "adrfam": "IPv4", 00:37:58.806 "traddr": "10.0.0.2", 00:37:58.806 "trsvcid": "4420", 00:37:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:58.806 }, 00:37:58.806 "ctrlr_data": { 
00:37:58.806 "cntlid": 1, 00:37:58.806 "vendor_id": "0x8086", 00:37:58.806 "model_number": "SPDK bdev Controller", 00:37:58.806 "serial_number": "SPDK0", 00:37:58.807 "firmware_revision": "25.01", 00:37:58.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:58.807 "oacs": { 00:37:58.807 "security": 0, 00:37:58.807 "format": 0, 00:37:58.807 "firmware": 0, 00:37:58.807 "ns_manage": 0 00:37:58.807 }, 00:37:58.807 "multi_ctrlr": true, 00:37:58.807 "ana_reporting": false 00:37:58.807 }, 00:37:58.807 "vs": { 00:37:58.807 "nvme_version": "1.3" 00:37:58.807 }, 00:37:58.807 "ns_data": { 00:37:58.807 "id": 1, 00:37:58.807 "can_share": true 00:37:58.807 } 00:37:58.807 } 00:37:58.807 ], 00:37:58.807 "mp_policy": "active_passive" 00:37:58.807 } 00:37:58.807 } 00:37:58.807 ] 00:37:58.807 00:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=366826 00:37:58.807 00:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:58.807 00:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:59.065 Running I/O for 10 seconds... 00:37:59.997 Latency(us) 00:37:59.997 [2024-11-19T23:02:34.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:59.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:59.997 Nvme0n1 : 1.00 13716.00 53.58 0.00 0.00 0.00 0.00 0.00 00:37:59.997 [2024-11-19T23:02:34.309Z] =================================================================================================================== 00:37:59.997 [2024-11-19T23:02:34.309Z] Total : 13716.00 53.58 0.00 0.00 0.00 0.00 0.00 00:37:59.997 00:38:00.932 00:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:38:00.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:00.932 Nvme0n1 : 2.00 13843.00 54.07 0.00 0.00 0.00 0.00 0.00 00:38:00.932 [2024-11-19T23:02:35.244Z] =================================================================================================================== 00:38:00.932 [2024-11-19T23:02:35.244Z] Total : 13843.00 54.07 0.00 0.00 0.00 0.00 0.00 00:38:00.932 00:38:01.190 true 00:38:01.190 00:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:38:01.190 00:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:01.448 00:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:01.448 00:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:01.448 00:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 366826 00:38:02.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:02.012 Nvme0n1 : 3.00 
13970.00 54.57 0.00 0.00 0.00 0.00 0.00 00:38:02.012 [2024-11-19T23:02:36.324Z] =================================================================================================================== 00:38:02.012 [2024-11-19T23:02:36.324Z] Total : 13970.00 54.57 0.00 0.00 0.00 0.00 0.00 00:38:02.012 00:38:02.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:02.943 Nvme0n1 : 4.00 14033.50 54.82 0.00 0.00 0.00 0.00 0.00 00:38:02.943 [2024-11-19T23:02:37.255Z] =================================================================================================================== 00:38:02.943 [2024-11-19T23:02:37.255Z] Total : 14033.50 54.82 0.00 0.00 0.00 0.00 0.00 00:38:02.943 00:38:04.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:04.316 Nvme0n1 : 5.00 14084.40 55.02 0.00 0.00 0.00 0.00 0.00 00:38:04.316 [2024-11-19T23:02:38.629Z] =================================================================================================================== 00:38:04.317 [2024-11-19T23:02:38.629Z] Total : 14084.40 55.02 0.00 0.00 0.00 0.00 0.00 00:38:04.317 00:38:05.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.250 Nvme0n1 : 6.00 14123.83 55.17 0.00 0.00 0.00 0.00 0.00 00:38:05.250 [2024-11-19T23:02:39.562Z] =================================================================================================================== 00:38:05.250 [2024-11-19T23:02:39.562Z] Total : 14123.83 55.17 0.00 0.00 0.00 0.00 0.00 00:38:05.250 00:38:06.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:06.182 Nvme0n1 : 7.00 14210.71 55.51 0.00 0.00 0.00 0.00 0.00 00:38:06.182 [2024-11-19T23:02:40.494Z] =================================================================================================================== 00:38:06.182 [2024-11-19T23:02:40.494Z] Total : 14210.71 55.51 0.00 0.00 0.00 0.00 0.00 00:38:06.182 00:38:07.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:07.117 Nvme0n1 : 8.00 14339.38 56.01 0.00 0.00 0.00 0.00 0.00 00:38:07.117 [2024-11-19T23:02:41.429Z] =================================================================================================================== 00:38:07.117 [2024-11-19T23:02:41.429Z] Total : 14339.38 56.01 0.00 0.00 0.00 0.00 0.00 00:38:07.117 00:38:08.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:08.050 Nvme0n1 : 9.00 14340.67 56.02 0.00 0.00 0.00 0.00 0.00 00:38:08.050 [2024-11-19T23:02:42.362Z] =================================================================================================================== 00:38:08.050 [2024-11-19T23:02:42.362Z] Total : 14340.67 56.02 0.00 0.00 0.00 0.00 0.00 00:38:08.050 00:38:08.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:08.989 Nvme0n1 : 10.00 14354.40 56.07 0.00 0.00 0.00 0.00 0.00 00:38:08.989 [2024-11-19T23:02:43.301Z] =================================================================================================================== 00:38:08.989 [2024-11-19T23:02:43.301Z] Total : 14354.40 56.07 0.00 0.00 0.00 0.00 0.00 00:38:08.989 00:38:08.989 00:38:08.989 Latency(us) 00:38:08.989 [2024-11-19T23:02:43.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:08.989 Nvme0n1 : 10.01 14356.68 56.08 0.00 0.00 8911.03 4830.25 19709.35 00:38:08.989 
[2024-11-19T23:02:43.301Z] =================================================================================================================== 00:38:08.989 [2024-11-19T23:02:43.301Z] Total : 14356.68 56.08 0.00 0.00 8911.03 4830.25 19709.35 00:38:08.989 { 00:38:08.989 "results": [ 00:38:08.989 { 00:38:08.989 "job": "Nvme0n1", 00:38:08.989 "core_mask": "0x2", 00:38:08.989 "workload": "randwrite", 00:38:08.989 "status": "finished", 00:38:08.989 "queue_depth": 128, 00:38:08.989 "io_size": 4096, 00:38:08.989 "runtime": 10.007329, 00:38:08.989 "iops": 14356.677990700615, 00:38:08.989 "mibps": 56.08077340117428, 00:38:08.989 "io_failed": 0, 00:38:08.989 "io_timeout": 0, 00:38:08.989 "avg_latency_us": 8911.027828737473, 00:38:08.989 "min_latency_us": 4830.245925925926, 00:38:08.989 "max_latency_us": 19709.345185185186 00:38:08.989 } 00:38:08.989 ], 00:38:08.989 "core_count": 1 00:38:08.989 } 00:38:08.989 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 366694 00:38:08.989 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 366694 ']' 00:38:08.989 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 366694 00:38:08.989 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:08.989 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:08.989 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 366694 00:38:08.989 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:08.989 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:08.989 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 366694' 00:38:08.989 killing process with pid 366694 00:38:08.989 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 366694 00:38:08.989 Received shutdown signal, test time was about 10.000000 seconds 00:38:08.989 00:38:08.989 Latency(us) 00:38:08.989 [2024-11-19T23:02:43.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.989 [2024-11-19T23:02:43.301Z] =================================================================================================================== 00:38:08.989 [2024-11-19T23:02:43.301Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:08.989 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 366694 00:38:09.248 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:09.506 00:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:38:09.764 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:38:09.764 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:10.022 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:10.022 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:10.022 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 364106 00:38:10.022 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 364106 00:38:10.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 364106 Killed "${NVMF_APP[@]}" "$@" 00:38:10.022 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:10.022 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:10.022 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:10.022 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:10.022 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:10.280 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=368143 00:38:10.280 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:10.280 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 368143 00:38:10.280 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 368143 ']' 00:38:10.280 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:10.280 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:10.280 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:10.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
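This is the step that makes the test "dirty": the original nvmf_tgt (pid 364106) is killed with SIGKILL while the grown lvstore is still open, so its metadata is never cleanly unloaded, and a fresh target (pid 368143) is started in interrupt mode. Roughly, and leaving out the ip netns and logging wrapping shown in the trace (paths shortened):

kill -9 "$nvmfpid"                                # SIGKILL: the lvstore is left dirty on disk
wait "$nvmfpid" || true                           # reaps the "Killed" status reported above
build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!
waitforlisten "$nvmfpid"                          # autotest helper: wait for /var/tmp/spdk.sock to come up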
00:38:10.280 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:10.280 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:10.280 [2024-11-20 00:02:44.389342] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:10.280 [2024-11-20 00:02:44.390533] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:38:10.280 [2024-11-20 00:02:44.390611] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:10.280 [2024-11-20 00:02:44.472370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.280 [2024-11-20 00:02:44.519520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:10.280 [2024-11-20 00:02:44.519582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:10.280 [2024-11-20 00:02:44.519609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:10.280 [2024-11-20 00:02:44.519623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:10.280 [2024-11-20 00:02:44.519635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:10.280 [2024-11-20 00:02:44.520276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.538 [2024-11-20 00:02:44.609598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:10.538 [2024-11-20 00:02:44.609952] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
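With the replacement target listening, the next commands re-create the AIO bdev on the same backing file. Because the previous process was SIGKILLed, the lvstore was never cleanly unloaded, so loading it goes through blobstore recovery; that is what the bs_recover "Performing recovery on blobstore" and "Recover: blob 0x0/0x1" notices just below report. The test then checks that the grow performed under I/O survived the crash: the recovered store must still show 61 free and 99 total data clusters, and the lvol must reappear under its original UUID. Condensed from the steps that follow ($rpc and $aio_file are the same shorthand as above):

$rpc bdev_aio_create "$aio_file" aio_bdev 4096    # lvstore load triggers blobstore recovery
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b 16ef3b26-1f0b-4fa9-a449-ce273b948357 -t 2000
free_clusters=$($rpc bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d | jq -r '.[0].free_clusters')
data_clusters=$($rpc bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d | jq -r '.[0].total_data_clusters')
(( free_clusters == 61 )) && (( data_clusters == 99 ))   # the grow survived the SIGKILL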
00:38:10.538 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:10.538 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:10.538 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:10.538 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:10.538 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:10.538 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:10.538 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:10.796 [2024-11-20 00:02:44.919189] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:10.796 [2024-11-20 00:02:44.919358] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:10.796 [2024-11-20 00:02:44.919417] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:10.796 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:10.796 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 16ef3b26-1f0b-4fa9-a449-ce273b948357 00:38:10.796 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=16ef3b26-1f0b-4fa9-a449-ce273b948357 00:38:10.796 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:10.796 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:10.796 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:10.796 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:10.796 00:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:11.054 00:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 16ef3b26-1f0b-4fa9-a449-ce273b948357 -t 2000 00:38:11.312 [ 00:38:11.312 { 00:38:11.312 "name": "16ef3b26-1f0b-4fa9-a449-ce273b948357", 00:38:11.312 "aliases": [ 00:38:11.312 "lvs/lvol" 00:38:11.312 ], 00:38:11.312 "product_name": "Logical Volume", 00:38:11.312 "block_size": 4096, 00:38:11.312 "num_blocks": 38912, 00:38:11.312 "uuid": "16ef3b26-1f0b-4fa9-a449-ce273b948357", 00:38:11.312 "assigned_rate_limits": { 00:38:11.312 "rw_ios_per_sec": 0, 00:38:11.312 "rw_mbytes_per_sec": 0, 00:38:11.312 
"r_mbytes_per_sec": 0, 00:38:11.312 "w_mbytes_per_sec": 0 00:38:11.312 }, 00:38:11.312 "claimed": false, 00:38:11.312 "zoned": false, 00:38:11.312 "supported_io_types": { 00:38:11.312 "read": true, 00:38:11.312 "write": true, 00:38:11.312 "unmap": true, 00:38:11.312 "flush": false, 00:38:11.312 "reset": true, 00:38:11.312 "nvme_admin": false, 00:38:11.312 "nvme_io": false, 00:38:11.312 "nvme_io_md": false, 00:38:11.312 "write_zeroes": true, 00:38:11.312 "zcopy": false, 00:38:11.312 "get_zone_info": false, 00:38:11.312 "zone_management": false, 00:38:11.312 "zone_append": false, 00:38:11.312 "compare": false, 00:38:11.312 "compare_and_write": false, 00:38:11.312 "abort": false, 00:38:11.312 "seek_hole": true, 00:38:11.312 "seek_data": true, 00:38:11.312 "copy": false, 00:38:11.312 "nvme_iov_md": false 00:38:11.312 }, 00:38:11.312 "driver_specific": { 00:38:11.312 "lvol": { 00:38:11.312 "lvol_store_uuid": "2f6b4566-fd0d-40ff-9131-a445ed06e14d", 00:38:11.312 "base_bdev": "aio_bdev", 00:38:11.312 "thin_provision": false, 00:38:11.312 "num_allocated_clusters": 38, 00:38:11.312 "snapshot": false, 00:38:11.312 "clone": false, 00:38:11.312 "esnap_clone": false 00:38:11.312 } 00:38:11.312 } 00:38:11.312 } 00:38:11.312 ] 00:38:11.312 00:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:11.312 00:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:38:11.312 00:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:11.571 00:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:11.571 00:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:38:11.571 00:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:11.829 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:11.829 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:12.087 [2024-11-20 00:02:46.300882] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:12.087 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:38:12.087 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:12.087 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:38:12.088 00:02:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:12.088 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.088 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:12.088 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.088 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:12.088 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.088 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:12.088 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:12.088 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:38:12.356 request: 00:38:12.356 { 00:38:12.356 "uuid": "2f6b4566-fd0d-40ff-9131-a445ed06e14d", 00:38:12.356 "method": "bdev_lvol_get_lvstores", 00:38:12.356 "req_id": 1 00:38:12.356 } 00:38:12.356 Got JSON-RPC error response 00:38:12.356 response: 00:38:12.356 { 00:38:12.356 "code": -19, 00:38:12.356 "message": "No such device" 00:38:12.356 } 00:38:12.356 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:12.356 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:12.356 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:12.356 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:12.356 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:12.612 aio_bdev 00:38:12.612 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 16ef3b26-1f0b-4fa9-a449-ce273b948357 00:38:12.612 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=16ef3b26-1f0b-4fa9-a449-ce273b948357 00:38:12.612 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:12.612 00:02:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:12.612 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:12.612 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:12.612 00:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:13.178 00:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 16ef3b26-1f0b-4fa9-a449-ce273b948357 -t 2000 00:38:13.178 [ 00:38:13.178 { 00:38:13.178 "name": "16ef3b26-1f0b-4fa9-a449-ce273b948357", 00:38:13.178 "aliases": [ 00:38:13.178 "lvs/lvol" 00:38:13.178 ], 00:38:13.178 "product_name": "Logical Volume", 00:38:13.178 "block_size": 4096, 00:38:13.178 "num_blocks": 38912, 00:38:13.178 "uuid": "16ef3b26-1f0b-4fa9-a449-ce273b948357", 00:38:13.178 "assigned_rate_limits": { 00:38:13.178 "rw_ios_per_sec": 0, 00:38:13.178 "rw_mbytes_per_sec": 0, 00:38:13.178 "r_mbytes_per_sec": 0, 00:38:13.178 "w_mbytes_per_sec": 0 00:38:13.178 }, 00:38:13.178 "claimed": false, 00:38:13.178 "zoned": false, 00:38:13.178 "supported_io_types": { 00:38:13.178 "read": true, 00:38:13.178 "write": true, 00:38:13.178 "unmap": true, 00:38:13.178 "flush": false, 00:38:13.178 "reset": true, 00:38:13.178 "nvme_admin": false, 00:38:13.178 "nvme_io": false, 00:38:13.178 "nvme_io_md": false, 00:38:13.178 "write_zeroes": true, 00:38:13.178 "zcopy": false, 00:38:13.178 "get_zone_info": false, 00:38:13.178 "zone_management": false, 00:38:13.178 "zone_append": false, 00:38:13.178 "compare": false, 00:38:13.178 "compare_and_write": false, 00:38:13.178 "abort": false, 00:38:13.178 "seek_hole": true, 00:38:13.178 "seek_data": true, 00:38:13.178 "copy": false, 00:38:13.178 "nvme_iov_md": false 00:38:13.178 }, 00:38:13.178 "driver_specific": { 00:38:13.178 "lvol": { 00:38:13.178 "lvol_store_uuid": "2f6b4566-fd0d-40ff-9131-a445ed06e14d", 00:38:13.178 "base_bdev": "aio_bdev", 00:38:13.178 "thin_provision": false, 00:38:13.179 "num_allocated_clusters": 38, 00:38:13.179 "snapshot": false, 00:38:13.179 "clone": false, 00:38:13.179 "esnap_clone": false 00:38:13.179 } 00:38:13.179 } 00:38:13.179 } 00:38:13.179 ] 00:38:13.179 00:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:13.179 00:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:38:13.179 00:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:13.759 00:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:13.760 00:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:38:13.760 00:02:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:13.760 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:13.760 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 16ef3b26-1f0b-4fa9-a449-ce273b948357 00:38:14.016 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2f6b4566-fd0d-40ff-9131-a445ed06e14d 00:38:14.274 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:14.840 00:38:14.840 real 0m19.606s 00:38:14.840 user 0m36.470s 00:38:14.840 sys 0m4.818s 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:14.840 ************************************ 00:38:14.840 END TEST lvs_grow_dirty 00:38:14.840 ************************************ 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:14.840 nvmf_trace.0 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:14.840 rmmod nvme_tcp 00:38:14.840 rmmod nvme_fabrics 00:38:14.840 rmmod nvme_keyring 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:14.840 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:14.841 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 368143 ']' 00:38:14.841 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 368143 00:38:14.841 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 368143 ']' 00:38:14.841 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 368143 00:38:14.841 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:14.841 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:14.841 00:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 368143 00:38:14.841 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:14.841 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:14.841 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 368143' 00:38:14.841 killing process with pid 368143 00:38:14.841 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 368143 00:38:14.841 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 368143 00:38:15.097 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:15.097 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:15.097 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:15.097 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:15.097 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:15.097 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:15.097 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:15.097 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:15.097 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:15.097 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:15.097 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:15.097 00:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:17.000 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:17.000 00:38:17.000 real 0m42.927s 00:38:17.000 user 0m55.629s 00:38:17.000 sys 0m8.609s 00:38:17.000 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:17.000 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:17.000 ************************************ 00:38:17.000 END TEST nvmf_lvs_grow 00:38:17.000 ************************************ 00:38:17.000 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:17.000 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:17.000 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:17.000 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:17.000 ************************************ 00:38:17.000 START TEST nvmf_bdev_io_wait 00:38:17.000 ************************************ 00:38:17.000 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:17.258 * Looking for test storage... 
00:38:17.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:17.258 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:17.258 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:38:17.258 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:17.258 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:17.258 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:17.258 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:17.258 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:17.258 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:17.258 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:17.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.259 --rc genhtml_branch_coverage=1 00:38:17.259 --rc genhtml_function_coverage=1 00:38:17.259 --rc genhtml_legend=1 00:38:17.259 --rc geninfo_all_blocks=1 00:38:17.259 --rc geninfo_unexecuted_blocks=1 00:38:17.259 00:38:17.259 ' 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:17.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.259 --rc genhtml_branch_coverage=1 00:38:17.259 --rc genhtml_function_coverage=1 00:38:17.259 --rc genhtml_legend=1 00:38:17.259 --rc geninfo_all_blocks=1 00:38:17.259 --rc geninfo_unexecuted_blocks=1 00:38:17.259 00:38:17.259 ' 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:17.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.259 --rc genhtml_branch_coverage=1 00:38:17.259 --rc genhtml_function_coverage=1 00:38:17.259 --rc genhtml_legend=1 00:38:17.259 --rc geninfo_all_blocks=1 00:38:17.259 --rc geninfo_unexecuted_blocks=1 00:38:17.259 00:38:17.259 ' 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:17.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.259 --rc genhtml_branch_coverage=1 00:38:17.259 --rc genhtml_function_coverage=1 00:38:17.259 --rc genhtml_legend=1 00:38:17.259 --rc geninfo_all_blocks=1 00:38:17.259 --rc 
geninfo_unexecuted_blocks=1 00:38:17.259 00:38:17.259 ' 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:17.259 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:17.260 00:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:19.792 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:19.792 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:19.792 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:19.792 
00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:19.792 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:19.792 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:19.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:19.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:38:19.793 00:38:19.793 --- 10.0.0.2 ping statistics --- 00:38:19.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:19.793 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:19.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:19.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:38:19.793 00:38:19.793 --- 10.0.0.1 ping statistics --- 00:38:19.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:19.793 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=370684 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 370684 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 370684 ']' 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:19.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:19.793 [2024-11-20 00:02:53.745412] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:19.793 [2024-11-20 00:02:53.746477] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:38:19.793 [2024-11-20 00:02:53.746527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:19.793 [2024-11-20 00:02:53.822588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:19.793 [2024-11-20 00:02:53.872795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:19.793 [2024-11-20 00:02:53.872857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:19.793 [2024-11-20 00:02:53.872883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:19.793 [2024-11-20 00:02:53.872896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:19.793 [2024-11-20 00:02:53.872908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:19.793 [2024-11-20 00:02:53.874622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:19.793 [2024-11-20 00:02:53.874690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:19.793 [2024-11-20 00:02:53.874781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:19.793 [2024-11-20 00:02:53.874784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.793 [2024-11-20 00:02:53.875243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.793 00:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:19.793 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.793 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:19.793 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.793 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:19.793 [2024-11-20 00:02:54.065809] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:19.793 [2024-11-20 00:02:54.065999] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:19.793 [2024-11-20 00:02:54.066895] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:19.793 [2024-11-20 00:02:54.067690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:19.793 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.793 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:19.793 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.793 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:19.793 [2024-11-20 00:02:54.075442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:19.793 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.793 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:19.793 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.793 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.052 Malloc0 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:20.052 [2024-11-20 00:02:54.127631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=370711 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:20.052 00:02:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=370713 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:20.052 { 00:38:20.052 "params": { 00:38:20.052 "name": "Nvme$subsystem", 00:38:20.052 "trtype": "$TEST_TRANSPORT", 00:38:20.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:20.052 "adrfam": "ipv4", 00:38:20.052 "trsvcid": "$NVMF_PORT", 00:38:20.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:20.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:20.052 "hdgst": ${hdgst:-false}, 00:38:20.052 "ddgst": ${ddgst:-false} 00:38:20.052 }, 00:38:20.052 "method": "bdev_nvme_attach_controller" 00:38:20.052 } 00:38:20.052 EOF 00:38:20.052 )") 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=370715 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:20.052 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:20.052 { 00:38:20.052 "params": { 00:38:20.052 "name": "Nvme$subsystem", 00:38:20.052 "trtype": "$TEST_TRANSPORT", 00:38:20.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:20.052 "adrfam": "ipv4", 00:38:20.052 "trsvcid": "$NVMF_PORT", 00:38:20.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:20.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:20.052 "hdgst": ${hdgst:-false}, 00:38:20.052 "ddgst": ${ddgst:-false} 00:38:20.052 }, 00:38:20.052 "method": "bdev_nvme_attach_controller" 00:38:20.052 } 00:38:20.052 EOF 00:38:20.052 )") 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:20.053 
00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=370718 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:20.053 { 00:38:20.053 "params": { 00:38:20.053 "name": "Nvme$subsystem", 00:38:20.053 "trtype": "$TEST_TRANSPORT", 00:38:20.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:20.053 "adrfam": "ipv4", 00:38:20.053 "trsvcid": "$NVMF_PORT", 00:38:20.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:20.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:20.053 "hdgst": ${hdgst:-false}, 00:38:20.053 "ddgst": ${ddgst:-false} 00:38:20.053 }, 00:38:20.053 "method": "bdev_nvme_attach_controller" 00:38:20.053 } 00:38:20.053 EOF 00:38:20.053 )") 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:20.053 { 00:38:20.053 "params": { 00:38:20.053 "name": "Nvme$subsystem", 00:38:20.053 "trtype": "$TEST_TRANSPORT", 00:38:20.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:20.053 "adrfam": "ipv4", 00:38:20.053 "trsvcid": "$NVMF_PORT", 00:38:20.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:20.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:20.053 "hdgst": ${hdgst:-false}, 00:38:20.053 "ddgst": ${ddgst:-false} 00:38:20.053 }, 00:38:20.053 "method": "bdev_nvme_attach_controller" 00:38:20.053 } 00:38:20.053 EOF 00:38:20.053 )") 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 370711 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:20.053 "params": { 00:38:20.053 "name": "Nvme1", 00:38:20.053 "trtype": "tcp", 00:38:20.053 "traddr": "10.0.0.2", 00:38:20.053 "adrfam": "ipv4", 00:38:20.053 "trsvcid": "4420", 00:38:20.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:20.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:20.053 "hdgst": false, 00:38:20.053 "ddgst": false 00:38:20.053 }, 00:38:20.053 "method": "bdev_nvme_attach_controller" 00:38:20.053 }' 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:20.053 "params": { 00:38:20.053 "name": "Nvme1", 00:38:20.053 "trtype": "tcp", 00:38:20.053 "traddr": "10.0.0.2", 00:38:20.053 "adrfam": "ipv4", 00:38:20.053 "trsvcid": "4420", 00:38:20.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:20.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:20.053 "hdgst": false, 00:38:20.053 "ddgst": false 00:38:20.053 }, 00:38:20.053 "method": "bdev_nvme_attach_controller" 00:38:20.053 }' 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:20.053 "params": { 00:38:20.053 "name": "Nvme1", 00:38:20.053 "trtype": "tcp", 00:38:20.053 "traddr": "10.0.0.2", 00:38:20.053 "adrfam": "ipv4", 00:38:20.053 "trsvcid": "4420", 00:38:20.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:20.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:20.053 "hdgst": false, 00:38:20.053 "ddgst": false 00:38:20.053 }, 00:38:20.053 "method": "bdev_nvme_attach_controller" 00:38:20.053 }' 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:20.053 00:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:20.053 "params": { 00:38:20.053 "name": "Nvme1", 00:38:20.053 "trtype": "tcp", 00:38:20.053 "traddr": "10.0.0.2", 00:38:20.053 "adrfam": "ipv4", 00:38:20.053 "trsvcid": "4420", 00:38:20.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:20.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:20.053 "hdgst": false, 00:38:20.053 "ddgst": false 00:38:20.053 }, 00:38:20.053 "method": "bdev_nvme_attach_controller" 00:38:20.053 }' 00:38:20.053 [2024-11-20 00:02:54.175710] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:38:20.053 [2024-11-20 00:02:54.175705] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:38:20.053 [2024-11-20 00:02:54.175796] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:20.053 [2024-11-20 00:02:54.175796] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:20.053 [2024-11-20 00:02:54.176085] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:38:20.053 [2024-11-20 00:02:54.176145] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:20.053 [2024-11-20 00:02:54.177471] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:38:20.053 [2024-11-20 00:02:54.177544] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:20.053 [2024-11-20 00:02:54.358482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.311 [2024-11-20 00:02:54.400108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:20.311 [2024-11-20 00:02:54.457451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.311 [2024-11-20 00:02:54.501422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:20.311 [2024-11-20 00:02:54.561135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.311 [2024-11-20 00:02:54.601235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:20.570 [2024-11-20 00:02:54.629244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.570 [2024-11-20 00:02:54.666716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:20.570 Running I/O for 1 seconds... 00:38:20.570 Running I/O for 1 seconds... 00:38:20.828 Running I/O for 1 seconds... 00:38:20.828 Running I/O for 1 seconds...
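The four bdevperf jobs above (write, read, flush, unmap) run in parallel, each pinned to its own core mask and shm id (-m 0x10 -i 1 through -m 0x80 -i 4) and each reading the target connection parameters as JSON on /dev/fd/63. A minimal standalone sketch of the write job is shown below; the inner bdev_nvme_attach_controller entry is copied from the expanded config printed in the trace, while the outer subsystems/config wrapper produced by gen_nvmf_target_json is an assumption made here for illustration, and the binary path is relative to the SPDK tree:

  # hypothetical reproduction of the write job traced above (sketch, not the exact helper)
  json='{"subsystems":[{"subsystem":"bdev","config":[{
          "method":"bdev_nvme_attach_controller",
          "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2",
                    "adrfam":"ipv4","trsvcid":"4420",
                    "subnqn":"nqn.2016-06.io.spdk:cnode1",
                    "hostnqn":"nqn.2016-06.io.spdk:host1",
                    "hdgst":false,"ddgst":false}}]}]}'
  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(printf '%s\n' "$json")   # the JSON reaches bdevperf on a /dev/fd/NN path, as in the trace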
00:38:21.762 4941.00 IOPS, 19.30 MiB/s [2024-11-19T23:02:56.074Z] 10776.00 IOPS, 42.09 MiB/s 00:38:21.762 Latency(us) 00:38:21.762 [2024-11-19T23:02:56.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.762 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:21.762 Nvme1n1 : 1.02 4960.80 19.38 0.00 0.00 25485.83 5267.15 36117.62 00:38:21.762 [2024-11-19T23:02:56.074Z] =================================================================================================================== 00:38:21.762 [2024-11-19T23:02:56.074Z] Total : 4960.80 19.38 0.00 0.00 25485.83 5267.15 36117.62 00:38:21.762 00:38:21.762 Latency(us) 00:38:21.762 [2024-11-19T23:02:56.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.762 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:21.762 Nvme1n1 : 1.01 10841.89 42.35 0.00 0.00 11764.33 4781.70 16893.72 00:38:21.762 [2024-11-19T23:02:56.074Z] =================================================================================================================== 00:38:21.762 [2024-11-19T23:02:56.074Z] Total : 10841.89 42.35 0.00 0.00 11764.33 4781.70 16893.72 00:38:21.762 5323.00 IOPS, 20.79 MiB/s 00:38:21.762 Latency(us) 00:38:21.762 [2024-11-19T23:02:56.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.762 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:21.762 Nvme1n1 : 1.01 5422.83 21.18 0.00 0.00 23522.74 5072.97 51263.72 00:38:21.762 [2024-11-19T23:02:56.074Z] =================================================================================================================== 00:38:21.762 [2024-11-19T23:02:56.074Z] Total : 5422.83 21.18 0.00 0.00 23522.74 5072.97 51263.72 00:38:21.762 200344.00 IOPS, 782.59 MiB/s 00:38:21.762 Latency(us) 00:38:21.762 [2024-11-19T23:02:56.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.762 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:21.762 Nvme1n1 : 1.00 199970.06 781.13 0.00 0.00 636.51 297.34 1856.85 00:38:21.762 [2024-11-19T23:02:56.074Z] =================================================================================================================== 00:38:21.762 [2024-11-19T23:02:56.074Z] Total : 199970.06 781.13 0.00 0.00 636.51 297.34 1856.85 00:38:21.762 00:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 370713 00:38:21.762 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 370715 00:38:22.026 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 370718 00:38:22.026 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:22.026 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.026 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:22.026 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.026 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:22.026 00:02:56 
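The MiB/s column in these tables follows directly from IOPS and the 4096-byte I/O size; taking the flush row as a worked check:

  MiB/s = IOPS x io_size / 2^20
        = 199970.06 x 4096 / 1048576
        = 781.13 MiB/s (approx.), matching the reported value

The same conversion reproduces the write, read, and unmap rows above.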
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:22.026 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:22.026 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:22.026 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:22.026 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:22.026 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:22.026 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:22.026 rmmod nvme_tcp 00:38:22.026 rmmod nvme_fabrics 00:38:22.026 rmmod nvme_keyring 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 370684 ']' 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 370684 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 370684 ']' 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 370684 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 370684 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 370684' 00:38:22.027 killing process with pid 370684 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 370684 00:38:22.027 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 370684 00:38:22.289 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:22.289 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:22.289 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:22.289 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:22.289 00:02:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:22.289 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:22.289 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:22.289 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:22.289 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:22.289 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.289 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:22.289 00:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:24.192 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:24.192 00:38:24.192 real 0m7.086s 00:38:24.192 user 0m13.793s 00:38:24.192 sys 0m3.906s 00:38:24.192 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:24.192 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:24.192 ************************************ 00:38:24.192 END TEST nvmf_bdev_io_wait 00:38:24.192 ************************************ 00:38:24.192 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:24.192 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:24.192 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:24.192 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:24.192 ************************************ 00:38:24.192 START TEST nvmf_queue_depth 00:38:24.192 ************************************ 00:38:24.192 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:24.192 * Looking for test storage... 
00:38:24.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:24.192 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:24.192 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:24.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:24.451 --rc genhtml_branch_coverage=1 00:38:24.451 --rc genhtml_function_coverage=1 00:38:24.451 --rc genhtml_legend=1 00:38:24.451 --rc geninfo_all_blocks=1 00:38:24.451 --rc geninfo_unexecuted_blocks=1 00:38:24.451 00:38:24.451 ' 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:24.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:24.451 --rc genhtml_branch_coverage=1 00:38:24.451 --rc genhtml_function_coverage=1 00:38:24.451 --rc genhtml_legend=1 00:38:24.451 --rc geninfo_all_blocks=1 00:38:24.451 --rc geninfo_unexecuted_blocks=1 00:38:24.451 00:38:24.451 ' 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:24.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:24.451 --rc genhtml_branch_coverage=1 00:38:24.451 --rc genhtml_function_coverage=1 00:38:24.451 --rc genhtml_legend=1 00:38:24.451 --rc geninfo_all_blocks=1 00:38:24.451 --rc geninfo_unexecuted_blocks=1 00:38:24.451 00:38:24.451 ' 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:24.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:24.451 --rc genhtml_branch_coverage=1 00:38:24.451 --rc genhtml_function_coverage=1 00:38:24.451 --rc genhtml_legend=1 00:38:24.451 --rc geninfo_all_blocks=1 00:38:24.451 --rc 
geninfo_unexecuted_blocks=1 00:38:24.451 00:38:24.451 ' 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:24.451 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:24.452 00:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:26.980 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:26.981 00:03:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:26.981 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:26.981 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:38:26.981 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:26.981 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:26.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:26.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:38:26.981 00:38:26.981 --- 10.0.0.2 ping statistics --- 00:38:26.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:26.981 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:26.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:26.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:38:26.981 00:38:26.981 --- 10.0.0.1 ping statistics --- 00:38:26.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:26.981 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:26.981 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:26.982 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:26.982 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=372988 00:38:26.982 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:26.982 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 372988 00:38:26.982 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 372988 ']' 00:38:26.982 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:26.982 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:26.982 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:26.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
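The interface plumbing traced just above gives the target and initiator separate network stacks on one host: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420, which the two pings then verify in both directions. Collected for readability, the sequence amounts to the sketch below (interfaces and addresses copied from the trace; the address flushes and the comment tag added by the ipts helper are omitted):

  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the initiator side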
00:38:26.982 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:26.982 00:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:26.982 [2024-11-20 00:03:00.870323] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:26.982 [2024-11-20 00:03:00.871437] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:38:26.982 [2024-11-20 00:03:00.871496] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:26.982 [2024-11-20 00:03:00.952445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.982 [2024-11-20 00:03:01.003782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:26.982 [2024-11-20 00:03:01.003838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:26.982 [2024-11-20 00:03:01.003862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:26.982 [2024-11-20 00:03:01.003874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:26.982 [2024-11-20 00:03:01.003885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:26.982 [2024-11-20 00:03:01.004498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:26.982 [2024-11-20 00:03:01.092605] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:26.982 [2024-11-20 00:03:01.092933] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:26.982 [2024-11-20 00:03:01.145143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:26.982 Malloc0 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
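Target provisioning for the queue-depth run is a short RPC sequence: create the TCP transport with the options shown, back it with a 64 MiB Malloc bdev, expose it through subsystem cnode1, and (just below) add a TCP listener on 10.0.0.2:4420. Assuming rpc_cmd forwards to scripts/rpc.py over the default /var/tmp/spdk.sock, the equivalent calls would be roughly:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420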
00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:26.982 [2024-11-20 00:03:01.205231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=373036 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 373036 /var/tmp/bdevperf.sock 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 373036 ']' 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:26.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:26.982 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:26.982 [2024-11-20 00:03:01.257167] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
00:38:26.982 [2024-11-20 00:03:01.257235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373036 ] 00:38:27.241 [2024-11-20 00:03:01.333900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.241 [2024-11-20 00:03:01.389446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.241 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:27.241 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:27.241 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:27.241 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.241 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:27.499 NVMe0n1 00:38:27.499 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.499 00:03:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:27.499 Running I/O for 10 seconds... 00:38:29.819 7881.00 IOPS, 30.79 MiB/s [2024-11-19T23:03:04.697Z] 7981.50 IOPS, 31.18 MiB/s [2024-11-19T23:03:06.068Z] 8002.67 IOPS, 31.26 MiB/s [2024-11-19T23:03:07.004Z] 8053.50 IOPS, 31.46 MiB/s [2024-11-19T23:03:07.937Z] 8059.80 IOPS, 31.48 MiB/s [2024-11-19T23:03:08.888Z] 8087.17 IOPS, 31.59 MiB/s [2024-11-19T23:03:09.981Z] 8090.71 IOPS, 31.60 MiB/s [2024-11-19T23:03:10.918Z] 8098.12 IOPS, 31.63 MiB/s [2024-11-19T23:03:11.858Z] 8146.11 IOPS, 31.82 MiB/s [2024-11-19T23:03:11.858Z] 8180.10 IOPS, 31.95 MiB/s 00:38:37.546 Latency(us) 00:38:37.546 [2024-11-19T23:03:11.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.546 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:37.546 Verification LBA range: start 0x0 length 0x4000 00:38:37.546 NVMe0n1 : 10.10 8190.03 31.99 0.00 0.00 124422.40 25049.32 77283.93 00:38:37.546 [2024-11-19T23:03:11.858Z] =================================================================================================================== 00:38:37.546 [2024-11-19T23:03:11.858Z] Total : 8190.03 31.99 0.00 0.00 124422.40 25049.32 77283.93 00:38:37.546 { 00:38:37.546 "results": [ 00:38:37.546 { 00:38:37.546 "job": "NVMe0n1", 00:38:37.546 "core_mask": "0x1", 00:38:37.546 "workload": "verify", 00:38:37.546 "status": "finished", 00:38:37.546 "verify_range": { 00:38:37.546 "start": 0, 00:38:37.546 "length": 16384 00:38:37.546 }, 00:38:37.546 "queue_depth": 1024, 00:38:37.546 "io_size": 4096, 00:38:37.546 "runtime": 10.099107, 00:38:37.546 "iops": 8190.031059181767, 00:38:37.546 "mibps": 31.992308824928777, 00:38:37.546 "io_failed": 0, 00:38:37.546 "io_timeout": 0, 00:38:37.546 "avg_latency_us": 124422.3954284568, 00:38:37.546 "min_latency_us": 25049.315555555557, 00:38:37.546 "max_latency_us": 77283.93481481481 00:38:37.546 } 00:38:37.546 ], 
00:38:37.546 "core_count": 1 00:38:37.546 } 00:38:37.546 00:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 373036 00:38:37.546 00:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 373036 ']' 00:38:37.546 00:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 373036 00:38:37.546 00:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:37.546 00:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:37.546 00:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 373036 00:38:37.804 00:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:37.804 00:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:37.804 00:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 373036' 00:38:37.804 killing process with pid 373036 00:38:37.804 00:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 373036 00:38:37.804 Received shutdown signal, test time was about 10.000000 seconds 00:38:37.804 00:38:37.804 Latency(us) 00:38:37.804 [2024-11-19T23:03:12.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.804 [2024-11-19T23:03:12.116Z] =================================================================================================================== 00:38:37.804 [2024-11-19T23:03:12.117Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:37.805 00:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 373036 00:38:37.805 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:37.805 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:37.805 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:37.805 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:37.805 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:37.805 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:37.805 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:37.805 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:37.805 rmmod nvme_tcp 00:38:37.805 rmmod nvme_fabrics 00:38:37.805 rmmod nvme_keyring 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:38.063 00:03:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 372988 ']' 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 372988 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 372988 ']' 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 372988 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 372988 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 372988' 00:38:38.063 killing process with pid 372988 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 372988 00:38:38.063 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 372988 00:38:38.321 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:38.321 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:38.321 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:38.321 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:38.321 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:38:38.321 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:38.321 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:38:38.321 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:38.321 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:38.321 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.321 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.321 00:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.225 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:40.225 00:38:40.225 real 0m16.016s 00:38:40.225 user 0m22.051s 00:38:40.225 sys 0m3.411s 00:38:40.225 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
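The nvmftestfini tail above unloads nvme-tcp (its verbose output is the rmmod lines), strips the iptables rules the suite tagged with SPDK_NVMF comments by replaying a filtered iptables-save dump, and removes the target network namespace before flushing the initiator interface. A condensed sketch of that cleanup; remove_spdk_ns is not expanded in the trace, so the ip netns delete call is an assumption about what it amounts to.

    # Condensed TCP-transport cleanup, modelled on the traced nvmftestfini path.
    modprobe -v -r nvme-tcp || true        # verbose removal cascades to nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics || true

    # Keep every rule except the ones tagged SPDK_NVMF, then reload the ruleset.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumption: what remove_spdk_ns boils down to
    ip -4 addr flush cvl_0_1                              # as traced at nvmf/common.sh@303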
common/autotest_common.sh@1130 -- # xtrace_disable 00:38:40.225 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:40.225 ************************************ 00:38:40.225 END TEST nvmf_queue_depth 00:38:40.225 ************************************ 00:38:40.225 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:40.225 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:40.225 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:40.225 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:40.225 ************************************ 00:38:40.225 START TEST nvmf_target_multipath 00:38:40.225 ************************************ 00:38:40.225 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:40.483 * Looking for test storage... 00:38:40.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:40.483 00:03:14 
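Each START TEST / END TEST banner pair in this log is printed by the suite's run_test wrapper, which brackets a sub-script with banners and times it (the real/user/sys lines are bash's time output). The wrapper itself is never expanded in the trace, so the block below is only an illustrative reconstruction of that pattern.

    # Illustrative run_test-style wrapper (reconstructed pattern, not the real helper).
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    # e.g. run_test_sketch nvmf_target_multipath ./test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode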
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:40.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.483 --rc genhtml_branch_coverage=1 00:38:40.483 --rc genhtml_function_coverage=1 00:38:40.483 --rc genhtml_legend=1 00:38:40.483 --rc geninfo_all_blocks=1 00:38:40.483 --rc geninfo_unexecuted_blocks=1 00:38:40.483 00:38:40.483 ' 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:40.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.483 --rc genhtml_branch_coverage=1 00:38:40.483 --rc genhtml_function_coverage=1 00:38:40.483 --rc genhtml_legend=1 00:38:40.483 --rc geninfo_all_blocks=1 00:38:40.483 --rc geninfo_unexecuted_blocks=1 00:38:40.483 00:38:40.483 ' 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:40.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.483 --rc genhtml_branch_coverage=1 00:38:40.483 --rc genhtml_function_coverage=1 00:38:40.483 --rc genhtml_legend=1 00:38:40.483 --rc geninfo_all_blocks=1 00:38:40.483 --rc 
geninfo_unexecuted_blocks=1 00:38:40.483 00:38:40.483 ' 00:38:40.483 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:40.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.483 --rc genhtml_branch_coverage=1 00:38:40.483 --rc genhtml_function_coverage=1 00:38:40.484 --rc genhtml_legend=1 00:38:40.484 --rc geninfo_all_blocks=1 00:38:40.484 --rc geninfo_unexecuted_blocks=1 00:38:40.484 00:38:40.484 ' 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
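The scripts/common.sh trace above (the ver1/ver2 arrays, decimal, and the '<' case) is the suite deciding whether the installed lcov is older than 2.x before choosing its LCOV_OPTS. A compact sketch of that dotted-version comparison, simplified from the traced cmp_versions helper rather than copied from it.

    # Simplified dotted-version "less than" check, modelled on the cmp_versions trace.
    version_lt() {                            # version_lt 1.15 2  -> success if $1 < $2
        local IFS='.-:'
        local -a ver1=($1) ver2=($2)
        local i a b
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            a=${ver1[i]:-0}; b=${ver2[i]:-0}
            ((a > b)) && return 1
            ((a < b)) && return 0
        done
        return 1                              # equal versions are not "less than"
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2.x"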
00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:40.484 00:03:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:40.484 00:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
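The build_nvmf_app_args trace above assembles the target's command line as a bash array: a shared-memory id, a wide log mask, the NO_HUGE passthrough and, because this suite was invoked with --interrupt-mode, the flag that switches the reactors from polling to interrupt-driven operation. A sketch of that array assembly; everything except the flags actually shown in the trace is a placeholder.

    # Sketch of the target argument assembly; placeholder values are marked as such.
    NVMF_APP_SHM_ID=0                              # placeholder: the trace only references this variable
    NO_HUGE=()                                     # empty in a default run
    NVMF_APP=(nvmf_tgt)                            # placeholder binary name
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id + log mask, as traced at nvmf/common.sh@29
    NVMF_APP+=("${NO_HUGE[@]}")
    NVMF_APP+=(--interrupt-mode)                   # appended because the '[' 1 -eq 1 ']' check above passed

    echo "target would be launched as: ${NVMF_APP[*]}"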
00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:42.383 00:03:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:42.383 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:42.383 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:42.383 00:03:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:42.383 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:42.383 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
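The gather_supported_nvmf_pci_devs trace above builds allow-lists of Intel E810/X722 and Mellanox device IDs, then walks every matching PCI address and records whatever interface the kernel exposes under its sysfs node, which is how cvl_0_0 and cvl_0_1 are found. A reduced sketch of that discovery walk, covering only the 8086:159b (E810) ID reported in the log; the real helper caches the PCI bus differently.

    # Reduced sketch: find E810 ports (8086:159b) and collect their netdev names from sysfs.
    net_devs=()
    for pci in $(lspci -Dmmn -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue              # port with no bound network driver
            net_devs+=("${netdir##*/}")               # e.g. cvl_0_0, cvl_0_1
        done
    done
    [ "${#net_devs[@]}" -gt 0 ] && printf 'Found net device: %s\n' "${net_devs[@]}"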
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:42.383 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:42.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:42.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:38:42.642 00:38:42.642 --- 10.0.0.2 ping statistics --- 00:38:42.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.642 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:42.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:42.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:38:42.642 00:38:42.642 --- 10.0.0.1 ping statistics --- 00:38:42.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.642 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:42.642 only one NIC for nvmf test 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:42.642 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:42.643 rmmod nvme_tcp 00:38:42.643 rmmod nvme_fabrics 00:38:42.643 rmmod nvme_keyring 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:42.643 00:03:16 
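nvmf_tcp_init above splits the two ports into a target side and an initiator side: cvl_0_0 is moved into a private network namespace and given 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened with an iptables rule tagged for later cleanup, and one ping in each direction proves the path before any NVMe-oF traffic. A condensed sketch of that setup using exactly the names and addresses from the trace.

    # Condensed target/initiator split, as performed by the traced nvmf_tcp_init.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                     # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe-oF/TCP listener port; the comment tag lets nvmftestfini find and drop the rule.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.2                                  # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1              # target namespace -> initiator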
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:42.643 00:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:45.173 00:03:18 
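The "only one NIC for nvmf test" branch above is multipath.sh bailing out early: no second target address was ever configured, so the script prints the notice, runs the usual nvmftestfini cleanup and exits 0 instead of exercising two paths. The traced lines at target/multipath.sh@45-48 reduce to a guard like the one below; the trace only shows an empty '[ -z ]' test, so the exact variable name is an educated guess based on the NVMF_SECOND_TARGET_IP= assignment earlier in the trace.

    # Guard reconstructed from target/multipath.sh@45-48 (variable name assumed, see note above).
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo "only one NIC for nvmf test"
        nvmftestfini          # same cleanup helper the trace expands right after the message
        exit 0
    fi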
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:45.173 00:38:45.173 real 0m4.471s 00:38:45.173 user 0m0.902s 00:38:45.173 sys 0m1.539s 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:45.173 00:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:45.173 ************************************ 00:38:45.173 END TEST nvmf_target_multipath 00:38:45.173 ************************************ 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:45.173 ************************************ 00:38:45.173 START TEST nvmf_zcopy 00:38:45.173 ************************************ 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:45.173 * Looking for test storage... 
00:38:45.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:45.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.173 --rc genhtml_branch_coverage=1 00:38:45.173 --rc genhtml_function_coverage=1 00:38:45.173 --rc genhtml_legend=1 00:38:45.173 --rc geninfo_all_blocks=1 00:38:45.173 --rc geninfo_unexecuted_blocks=1 00:38:45.173 00:38:45.173 ' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:45.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.173 --rc genhtml_branch_coverage=1 00:38:45.173 --rc genhtml_function_coverage=1 00:38:45.173 --rc genhtml_legend=1 00:38:45.173 --rc geninfo_all_blocks=1 00:38:45.173 --rc geninfo_unexecuted_blocks=1 00:38:45.173 00:38:45.173 ' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:45.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.173 --rc genhtml_branch_coverage=1 00:38:45.173 --rc genhtml_function_coverage=1 00:38:45.173 --rc genhtml_legend=1 00:38:45.173 --rc geninfo_all_blocks=1 00:38:45.173 --rc geninfo_unexecuted_blocks=1 00:38:45.173 00:38:45.173 ' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:45.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.173 --rc genhtml_branch_coverage=1 00:38:45.173 --rc genhtml_function_coverage=1 00:38:45.173 --rc genhtml_legend=1 00:38:45.173 --rc geninfo_all_blocks=1 00:38:45.173 --rc geninfo_unexecuted_blocks=1 00:38:45.173 00:38:45.173 ' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:45.173 00:03:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:45.173 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:45.174 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.174 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.174 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.174 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:45.174 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:45.174 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:45.174 00:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.072 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:47.072 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:47.072 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:47.072 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:47.072 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:47.072 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:47.072 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:47.073 00:03:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:47.073 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:47.073 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:47.073 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:47.073 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:47.073 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:47.074 00:03:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:47.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:47.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:38:47.074 00:38:47.074 --- 10.0.0.2 ping statistics --- 00:38:47.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.074 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:47.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:47.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:38:47.074 00:38:47.074 --- 10.0.0.1 ping statistics --- 00:38:47.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.074 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:47.074 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=378745 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 378745 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 378745 ']' 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:47.338 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.338 [2024-11-20 00:03:21.449834] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:47.338 [2024-11-20 00:03:21.450904] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:38:47.338 [2024-11-20 00:03:21.450981] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:47.338 [2024-11-20 00:03:21.523909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.338 [2024-11-20 00:03:21.570832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:47.338 [2024-11-20 00:03:21.570903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:47.338 [2024-11-20 00:03:21.570929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:47.338 [2024-11-20 00:03:21.570943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:47.338 [2024-11-20 00:03:21.570954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:47.338 [2024-11-20 00:03:21.571610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:47.596 [2024-11-20 00:03:21.659669] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:47.596 [2024-11-20 00:03:21.660019] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
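The namespace and target bring-up traced above condenses to the following minimal sketch. Every command is taken from this trace; the interface names (cvl_0_0 / cvl_0_1), the 10.0.0.0/24 addresses, and the core mask are specific to this run, and the nvmf_tgt path is shortened to its repo-relative form.

    # Move one NIC port into a private namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check connectivity in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the target inside the namespace in interrupt mode on core mask 0x2.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &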
00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.596 [2024-11-20 00:03:21.712293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.596 [2024-11-20 00:03:21.728481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:47.596 00:03:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.596 malloc0 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:47.596 { 00:38:47.596 "params": { 00:38:47.596 "name": "Nvme$subsystem", 00:38:47.596 "trtype": "$TEST_TRANSPORT", 00:38:47.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:47.596 "adrfam": "ipv4", 00:38:47.596 "trsvcid": "$NVMF_PORT", 00:38:47.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:47.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:47.596 "hdgst": ${hdgst:-false}, 00:38:47.596 "ddgst": ${ddgst:-false} 00:38:47.596 }, 00:38:47.596 "method": "bdev_nvme_attach_controller" 00:38:47.596 } 00:38:47.596 EOF 00:38:47.596 )") 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:47.596 00:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:47.596 "params": { 00:38:47.596 "name": "Nvme1", 00:38:47.596 "trtype": "tcp", 00:38:47.596 "traddr": "10.0.0.2", 00:38:47.596 "adrfam": "ipv4", 00:38:47.596 "trsvcid": "4420", 00:38:47.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:47.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:47.596 "hdgst": false, 00:38:47.596 "ddgst": false 00:38:47.596 }, 00:38:47.596 "method": "bdev_nvme_attach_controller" 00:38:47.596 }' 00:38:47.596 [2024-11-20 00:03:21.806202] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
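In essence, the subsystem provisioning that zcopy.sh performs here through rpc_cmd (the autotest helper, which forwards to scripts/rpc.py against /var/tmp/spdk.sock) is the sequence below; every flag is copied verbatim from the trace above.

    # Zero-copy-enabled TCP transport, a subsystem, its data and discovery
    # listeners on 10.0.0.2:4420, and a malloc bdev exported as namespace 1.
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0    # 32 MB bdev, 4096-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1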
00:38:47.596 [2024-11-20 00:03:21.806297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378769 ] 00:38:47.596 [2024-11-20 00:03:21.879334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.855 [2024-11-20 00:03:21.928945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.855 Running I/O for 10 seconds... 00:38:50.162 5486.00 IOPS, 42.86 MiB/s [2024-11-19T23:03:25.417Z] 5521.50 IOPS, 43.14 MiB/s [2024-11-19T23:03:26.351Z] 5545.00 IOPS, 43.32 MiB/s [2024-11-19T23:03:27.287Z] 5559.50 IOPS, 43.43 MiB/s [2024-11-19T23:03:28.221Z] 5552.80 IOPS, 43.38 MiB/s [2024-11-19T23:03:29.594Z] 5557.33 IOPS, 43.42 MiB/s [2024-11-19T23:03:30.526Z] 5557.43 IOPS, 43.42 MiB/s [2024-11-19T23:03:31.461Z] 5559.25 IOPS, 43.43 MiB/s [2024-11-19T23:03:32.397Z] 5560.89 IOPS, 43.44 MiB/s [2024-11-19T23:03:32.397Z] 5561.40 IOPS, 43.45 MiB/s 00:38:58.085 Latency(us) 00:38:58.085 [2024-11-19T23:03:32.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:58.085 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:58.085 Verification LBA range: start 0x0 length 0x1000 00:38:58.085 Nvme1n1 : 10.02 5565.28 43.48 0.00 0.00 22937.95 1517.04 30486.38 00:38:58.085 [2024-11-19T23:03:32.397Z] =================================================================================================================== 00:38:58.085 [2024-11-19T23:03:32.397Z] Total : 5565.28 43.48 0.00 0.00 22937.95 1517.04 30486.38 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=380017 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:58.085 { 00:38:58.085 "params": { 00:38:58.085 "name": "Nvme$subsystem", 00:38:58.085 "trtype": "$TEST_TRANSPORT", 00:38:58.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:58.085 "adrfam": "ipv4", 00:38:58.085 "trsvcid": "$NVMF_PORT", 00:38:58.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:58.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:58.085 "hdgst": ${hdgst:-false}, 00:38:58.085 "ddgst": ${ddgst:-false} 00:38:58.085 }, 00:38:58.085 "method": "bdev_nvme_attach_controller" 00:38:58.085 } 00:38:58.085 EOF 00:38:58.085 )") 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:58.085 
[2024-11-20 00:03:32.392235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.085 [2024-11-20 00:03:32.392280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:58.085 00:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:58.085 "params": { 00:38:58.085 "name": "Nvme1", 00:38:58.085 "trtype": "tcp", 00:38:58.085 "traddr": "10.0.0.2", 00:38:58.085 "adrfam": "ipv4", 00:38:58.085 "trsvcid": "4420", 00:38:58.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:58.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:58.085 "hdgst": false, 00:38:58.085 "ddgst": false 00:38:58.085 }, 00:38:58.085 "method": "bdev_nvme_attach_controller" 00:38:58.085 }' 00:38:58.343 [2024-11-20 00:03:32.400151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.343 [2024-11-20 00:03:32.400179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.343 [2024-11-20 00:03:32.408129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.343 [2024-11-20 00:03:32.408157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.343 [2024-11-20 00:03:32.416137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.343 [2024-11-20 00:03:32.416165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.343 [2024-11-20 00:03:32.424144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.343 [2024-11-20 00:03:32.424170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.343 [2024-11-20 00:03:32.432145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.343 [2024-11-20 00:03:32.432178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.343 [2024-11-20 00:03:32.432913] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
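Both bdevperf runs above take their controller configuration on an anonymous file descriptor (--json /dev/fd/62 and /dev/fd/63), which is the footprint of bash process substitution around gen_nvmf_target_json; the printf output in the trace shows the bdev_nvme_attach_controller parameters it generates. A minimal sketch of the pattern, with the binary path shortened to its repo-relative form:

    # 10-second verify pass over the zero-copy TCP transport: queue depth 128, 8 KiB I/O.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
    # 5-second 50/50 random read/write pass, backgrounded (the trace records its
    # pid as perfpid=380017) while the script continues issuing namespace RPCs,
    # which is where the add_ns errors interleaved above and below come from.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!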
00:38:58.343 [2024-11-20 00:03:32.432973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380017 ] 00:38:58.343 [2024-11-20 00:03:32.440129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.343 [2024-11-20 00:03:32.440158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.343 [2024-11-20 00:03:32.448129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.343 [2024-11-20 00:03:32.448165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.343 [2024-11-20 00:03:32.456129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.343 [2024-11-20 00:03:32.456154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.343 [2024-11-20 00:03:32.464128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.343 [2024-11-20 00:03:32.464153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.343 [2024-11-20 00:03:32.472153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.343 [2024-11-20 00:03:32.472179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.343 [2024-11-20 00:03:32.480150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.343 [2024-11-20 00:03:32.480177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.343 [2024-11-20 00:03:32.488147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.488173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.496149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.496174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.504147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.504173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.508552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.344 [2024-11-20 00:03:32.512151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.512177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.520208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.520250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.528180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.528215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.536148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.536175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:38:58.344 [2024-11-20 00:03:32.544149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.544175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.552146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.552172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.560154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.560181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.561524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:58.344 [2024-11-20 00:03:32.568149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.568175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.576159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.576190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.584179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.584219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.596512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.596575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.608229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.608287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.620231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.620290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.632195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.632243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.640171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.640203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.344 [2024-11-20 00:03:32.652245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.344 [2024-11-20 00:03:32.652304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.664214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.664266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.672151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.672178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 
00:03:32.680150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.680176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.688151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.688178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.696149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.696175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.704151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.704177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.712153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.712178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.720152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.720178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.728153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.728178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.736973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.737014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.744153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.744179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 Running I/O for 5 seconds... 
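The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs interleaved through the 5-second randrw run are target-side RPC rejections: xtrace is disabled at target/zcopy.sh@41 so the caller is not shown, but the script is evidently re-issuing nvmf_subsystem_add_ns for NSID 1 while the namespace is still attached and bdevperf I/O is in flight. A purely hypothetical sketch of what such a loop could look like (the real logic lives in test/nvmf/target/zcopy.sh):

    # Hypothetical illustration only -- not the actual zcopy.sh loop.
    while kill -0 "$perfpid" 2>/dev/null; do
        # NSID 1 is already attached, so every attempt is rejected with
        # "Requested NSID 1 already in use" while bdevperf keeps running
        # against the target.
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done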
00:38:58.602 [2024-11-20 00:03:32.752150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.752176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.768823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.768852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.788528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.788557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.806914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.806941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.817155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.817183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.834420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.834447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.849641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.849670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.866259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.866287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.882970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.883012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.896917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.896946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.602 [2024-11-20 00:03:32.907315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.602 [2024-11-20 00:03:32.907343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:32.922710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:32.922742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:32.933353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:32.933400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:32.950111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:32.950157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:32.966556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 
[2024-11-20 00:03:32.966599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:32.979350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:32.979379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:32.990000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:32.990046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:33.005156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:33.005197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:33.022621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:33.022653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:33.037997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:33.038026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:33.056552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:33.056589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:33.074032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:33.074060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:33.091448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:33.091481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:33.102055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:33.102100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:33.118207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:33.118234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:33.134344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:33.134372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:33.145232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:33.145274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.860 [2024-11-20 00:03:33.162065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.860 [2024-11-20 00:03:33.162108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.178621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.178664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.189418] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.189444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.205465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.205491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.222386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.222415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.233092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.233119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.249971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.250004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.266585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.266628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.277292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.277325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.292231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.292259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.302574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.302603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.317156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.317186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.335909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.335941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.346383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.346411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.361225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.361253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.377987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.378029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.394591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.394619] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.405151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.405179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.119 [2024-11-20 00:03:33.422191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.119 [2024-11-20 00:03:33.422217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.437940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.437968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.454268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.454296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.470661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.470693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.481280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.481309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.497468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.497503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.512839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.512881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.532673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.532700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.551854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.551883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.562407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.562435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.579166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.579193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.589596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.589622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.604797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.604822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.624343] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.377 [2024-11-20 00:03:33.624385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.377 [2024-11-20 00:03:33.635036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.378 [2024-11-20 00:03:33.635088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.378 [2024-11-20 00:03:33.649984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.378 [2024-11-20 00:03:33.650016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.378 [2024-11-20 00:03:33.666668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.378 [2024-11-20 00:03:33.666700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.378 [2024-11-20 00:03:33.681907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.378 [2024-11-20 00:03:33.681948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.635 [2024-11-20 00:03:33.698287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.635 [2024-11-20 00:03:33.698315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.635 [2024-11-20 00:03:33.714120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.635 [2024-11-20 00:03:33.714148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.635 [2024-11-20 00:03:33.730829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.635 [2024-11-20 00:03:33.730857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.635 [2024-11-20 00:03:33.741378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.741433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 10899.00 IOPS, 85.15 MiB/s [2024-11-19T23:03:33.948Z] [2024-11-20 00:03:33.756456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.756483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 [2024-11-20 00:03:33.766605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.766636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 [2024-11-20 00:03:33.781449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.781476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 [2024-11-20 00:03:33.798641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.798669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 [2024-11-20 00:03:33.809241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.809267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 [2024-11-20 00:03:33.826283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:59.636 [2024-11-20 00:03:33.826311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 [2024-11-20 00:03:33.841948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.841990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 [2024-11-20 00:03:33.857922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.857964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 [2024-11-20 00:03:33.873636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.873666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 [2024-11-20 00:03:33.892144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.892170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 [2024-11-20 00:03:33.903322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.903354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 [2024-11-20 00:03:33.917786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.917817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.636 [2024-11-20 00:03:33.934426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.636 [2024-11-20 00:03:33.934471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:33.949605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:33.949633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:33.966553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:33.966594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:33.977364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:33.977391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:33.994088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:33.994133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.009856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.009883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.025864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.025891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.042699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.042726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.053180] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.053218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.068916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.068942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.086460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.086501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.099614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.099641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.110041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.110077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.125383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.125433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.144522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.144554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.163417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.163443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.173479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.173505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.188235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.188262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.893 [2024-11-20 00:03:34.198182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.893 [2024-11-20 00:03:34.198208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.213085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.213127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.229995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.230037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.246361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.246405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.257108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.257138] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.274216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.274244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.290790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.290818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.306738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.306766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.317340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.317367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.334120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.334146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.348472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.348501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.368564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.368608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.388759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.388785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.399428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.399474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.414323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.414360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.430946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.430974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.441739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.441771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.152 [2024-11-20 00:03:34.457000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.152 [2024-11-20 00:03:34.457029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.410 [2024-11-20 00:03:34.474257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.410 [2024-11-20 00:03:34.474285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.410 [2024-11-20 00:03:34.488632] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.410 [2024-11-20 00:03:34.488660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.410 [2024-11-20 00:03:34.499601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.410 [2024-11-20 00:03:34.499633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.410 [2024-11-20 00:03:34.514486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.410 [2024-11-20 00:03:34.514519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.410 [2024-11-20 00:03:34.526221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.410 [2024-11-20 00:03:34.526254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.410 [2024-11-20 00:03:34.541693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.410 [2024-11-20 00:03:34.541725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.410 [2024-11-20 00:03:34.558697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.410 [2024-11-20 00:03:34.558729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.410 [2024-11-20 00:03:34.569731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.410 [2024-11-20 00:03:34.569769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.410 [2024-11-20 00:03:34.585304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.411 [2024-11-20 00:03:34.585338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.411 [2024-11-20 00:03:34.602405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.411 [2024-11-20 00:03:34.602449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.411 [2024-11-20 00:03:34.617964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.411 [2024-11-20 00:03:34.617996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.411 [2024-11-20 00:03:34.629635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.411 [2024-11-20 00:03:34.629668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.411 [2024-11-20 00:03:34.645542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.411 [2024-11-20 00:03:34.645575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.411 [2024-11-20 00:03:34.662003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.411 [2024-11-20 00:03:34.662036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.411 [2024-11-20 00:03:34.677723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.411 [2024-11-20 00:03:34.677766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.411 [2024-11-20 00:03:34.694566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.411 [2024-11-20 00:03:34.694609] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.411 [2024-11-20 00:03:34.705602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.411 [2024-11-20 00:03:34.705629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.669 [2024-11-20 00:03:34.722749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.669 [2024-11-20 00:03:34.722779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.669 [2024-11-20 00:03:34.734024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.734056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.749101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.749150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 10833.50 IOPS, 84.64 MiB/s [2024-11-19T23:03:34.982Z] [2024-11-20 00:03:34.768657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.768689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.785995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.786027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.802507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.802548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.816865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.816897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.834601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.834645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.845495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.845522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.862037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.862091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.878682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.878709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.889048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.889093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.906057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.906093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 
00:03:34.921751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.921779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.938456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.938485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.953765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.953794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.670 [2024-11-20 00:03:34.972189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.670 [2024-11-20 00:03:34.972216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:34.982592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:34.982621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:34.998232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:34.998260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.012666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.012695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.023137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.023165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.037914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.037941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.054681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.054724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.068668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.068696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.079338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.079365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.094351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.094378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.105156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.105183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.120711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.120739] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.140542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.140569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.151126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.151170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.166359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.166391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.177256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.177283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.193688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.193715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.210511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.210557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.928 [2024-11-20 00:03:35.225996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.928 [2024-11-20 00:03:35.226025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.186 [2024-11-20 00:03:35.242653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.186 [2024-11-20 00:03:35.242695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.186 [2024-11-20 00:03:35.253490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.186 [2024-11-20 00:03:35.253523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.186 [2024-11-20 00:03:35.268229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.186 [2024-11-20 00:03:35.268262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.186 [2024-11-20 00:03:35.278373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.278421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.292958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.292991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.312338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.312382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.323187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.323214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.334518] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.334545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.348583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.348611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.359021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.359049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.374103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.374149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.389639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.389666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.408671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.408699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.425602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.425631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.442293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.442321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.455578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.455606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.466046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.466091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.187 [2024-11-20 00:03:35.482021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.187 [2024-11-20 00:03:35.482062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.497714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.497743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.514343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.514371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.529273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.529303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.546122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.546151] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.562720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.562766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.575515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.575543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.585642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.585674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.600921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.600954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.620800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.620826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.640467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.640499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.651148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.651175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.666112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.666139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.682648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.682676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.693009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.693041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.709693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.709719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.726485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.726514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.737510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.737538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.445 [2024-11-20 00:03:35.753249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.445 [2024-11-20 00:03:35.753292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 10852.67 IOPS, 84.79 MiB/s [2024-11-19T23:03:36.016Z] [2024-11-20 
00:03:35.770506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.770534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.783876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.783904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.794398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.794436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.810198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.810225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.826593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.826634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.840950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.840982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.860264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.860297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.870755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.870787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.883455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.883487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.895151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.895177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.906877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.906909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.918366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.918406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.932407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.932435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.942682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.942714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.957659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.957691] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.971276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.971305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.981607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.981639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.704 [2024-11-20 00:03:35.996833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.704 [2024-11-20 00:03:35.996860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.015476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.015517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.025718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.025750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.041740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.041767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.058317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.058369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.073945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.073987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.090246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.090275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.106848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.106892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.117226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.117252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.133187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.133226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.150781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.150825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.161206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.161233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.178240] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.178266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.189643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.189675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.205016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.205048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.221856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.221898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.238884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.238926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.249669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.249696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.969 [2024-11-20 00:03:36.265484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:01.969 [2024-11-20 00:03:36.265516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.226 [2024-11-20 00:03:36.280641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.226 [2024-11-20 00:03:36.280671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.226 [2024-11-20 00:03:36.301127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.226 [2024-11-20 00:03:36.301156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.226 [2024-11-20 00:03:36.318609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.318637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.331579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.331607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.341771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.341810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.357383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.357423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.374385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.374412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.385383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.385415] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.401186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.401213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.417714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.417742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.435470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.435502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.446622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.446654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.461828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.461870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.473081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.473108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.490547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.490579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.501186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.501214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.519355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.519386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.227 [2024-11-20 00:03:36.529534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.227 [2024-11-20 00:03:36.529562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.546437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.546464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.562361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.562406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.577834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.577862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.594659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.594687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.607512] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.607541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.617559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.617591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.633102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.633147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.646904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.646932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.657238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.657265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.673594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.673621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.693017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.693043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.710975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.711008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.721174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.721202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.737390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.737417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.756943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.756976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 10840.50 IOPS, 84.69 MiB/s [2024-11-19T23:03:36.797Z] [2024-11-20 00:03:36.772535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.772576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.485 [2024-11-20 00:03:36.792783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.485 [2024-11-20 00:03:36.792811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.746 [2024-11-20 00:03:36.812870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.746 [2024-11-20 00:03:36.812903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.746 [2024-11-20 00:03:36.827683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:02.746 [2024-11-20 00:03:36.827711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:02.746 [2024-11-20 00:03:36.837747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:02.746 [2024-11-20 00:03:36.837775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats every 10-20 ms from 00:03:36.853 through 00:03:37.641 (several dozen further occurrences, differing only in their timestamps), as the zcopy test keeps retrying the namespace add against nqn.2016-06.io.spdk:cnode1 ...]
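The pair of messages condensed above is the target-side view of a deliberately exercised failure path: the test keeps asking the target to attach a namespace with NSID 1 while that NSID is already occupied, so spdk_nvmf_subsystem_add_ns_ext rejects every attempt and the RPC layer reports "Unable to add namespace". As a hedged illustration (not the zcopy test script itself), the same collision can be provoked against any running nvmf_tgt with the stock scripts/rpc.py client:

  # Minimal sketch, assuming a running nvmf_tgt and the default /var/tmp/spdk.sock RPC socket.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # succeeds, NSID 1 is now in use
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # rejected; the target logs the
  #   "Requested NSID 1 already in use" / "Unable to add namespace" pair seen throughout this run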
00:39:03.376 [... the error pair continues at the same cadence from 00:03:37.658 through 00:03:37.765 ...]
00:39:03.635 10836.80 IOPS, 84.66 MiB/s [2024-11-19T23:03:37.947Z]
00:39:03.635 [2024-11-20 00:03:37.776149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:03.635 [2024-11-20 00:03:37.776174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:03.635                                                                    Latency(us)
[2024-11-19T23:03:37.947Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:39:03.635 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:39:03.635 Nvme1n1                                  :       5.01   10834.66      84.65       0.00       0.00   11796.67    3034.07   19709.35
[2024-11-19T23:03:37.947Z] ===================================================================================================================
00:39:03.635 [2024-11-19T23:03:37.947Z] Total         :              10834.66      84.65       0.00       0.00   11796.67    3034.07   19709.35
00:39:03.635 [2024-11-20 00:03:37.784154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:03.635 [2024-11-20 00:03:37.784179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair keeps repeating, now at roughly 8-12 ms intervals, from 00:03:37.792 through 00:03:37.936 ...] 00:39:03.635 [2024-11-20 00:03:37.944152]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.635 [2024-11-20 00:03:37.944177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.893 [2024-11-20 00:03:37.952154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.893 [2024-11-20 00:03:37.952178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.893 [2024-11-20 00:03:37.960157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.893 [2024-11-20 00:03:37.960181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.893 [2024-11-20 00:03:37.968153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.893 [2024-11-20 00:03:37.968178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (380017) - No such process 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 380017 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.893 delay0 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.893 00:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:03.893 [2024-11-20 00:03:38.054122] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:12.004 Initializing NVMe Controllers 00:39:12.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:12.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:12.005 Initialization 
complete. Launching workers. 00:39:12.005 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 260, failed: 14880 00:39:12.005 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15035, failed to submit 105 00:39:12.005 success 14963, unsuccessful 72, failed 0 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:12.005 rmmod nvme_tcp 00:39:12.005 rmmod nvme_fabrics 00:39:12.005 rmmod nvme_keyring 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 378745 ']' 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 378745 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 378745 ']' 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 378745 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 378745 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 378745' 00:39:12.005 killing process with pid 378745 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 378745 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 378745 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:12.005 00:03:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:12.005 00:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:13.396 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:13.396 00:39:13.396 real 0m28.519s 00:39:13.396 user 0m40.138s 00:39:13.396 sys 0m10.314s 00:39:13.396 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:13.396 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:13.396 ************************************ 00:39:13.396 END TEST nvmf_zcopy 00:39:13.396 ************************************ 00:39:13.396 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:13.396 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:13.396 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:13.396 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:13.396 ************************************ 00:39:13.396 START TEST nvmf_nmic 00:39:13.396 ************************************ 00:39:13.396 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:13.396 * Looking for test storage... 
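At this point nvmf_zcopy is done: nvmftestfini unloads the initiator-side kernel modules, kills the target process (pid 378745 in this run), strips the SPDK_NVMF-tagged iptables rules, removes the target network namespace, and flushes the initiator address before run_test moves on to nvmf_nmic. Ignoring the extra bookkeeping the nvmf/common.sh helpers perform, a rough sketch of an equivalent manual teardown (names taken from this run) is:

  # Hedged sketch of the cleanup performed between tests.
  modprobe -v -r nvme-tcp                                # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged above
  kill 378745                                            # the nvmf_tgt started for the zcopy test
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged by the test harness
  ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the initiator port, as in the command above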
00:39:13.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:13.396 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:13.396 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:39:13.396 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:13.660 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:13.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.661 --rc genhtml_branch_coverage=1 00:39:13.661 --rc genhtml_function_coverage=1 00:39:13.661 --rc genhtml_legend=1 00:39:13.661 --rc geninfo_all_blocks=1 00:39:13.661 --rc geninfo_unexecuted_blocks=1 00:39:13.661 00:39:13.661 ' 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:13.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.661 --rc genhtml_branch_coverage=1 00:39:13.661 --rc genhtml_function_coverage=1 00:39:13.661 --rc genhtml_legend=1 00:39:13.661 --rc geninfo_all_blocks=1 00:39:13.661 --rc geninfo_unexecuted_blocks=1 00:39:13.661 00:39:13.661 ' 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:13.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.661 --rc genhtml_branch_coverage=1 00:39:13.661 --rc genhtml_function_coverage=1 00:39:13.661 --rc genhtml_legend=1 00:39:13.661 --rc geninfo_all_blocks=1 00:39:13.661 --rc geninfo_unexecuted_blocks=1 00:39:13.661 00:39:13.661 ' 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:13.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.661 --rc genhtml_branch_coverage=1 00:39:13.661 --rc genhtml_function_coverage=1 00:39:13.661 --rc genhtml_legend=1 00:39:13.661 --rc geninfo_all_blocks=1 00:39:13.661 --rc geninfo_unexecuted_blocks=1 00:39:13.661 00:39:13.661 ' 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:13.661 00:03:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:13.661 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:13.662 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:13.662 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:13.662 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:13.662 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:13.662 00:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:15.563 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:15.563 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:15.564 00:03:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:15.564 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:15.564 00:03:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:15.564 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:15.564 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:15.564 
00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:15.564 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:15.564 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
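What nvmf_tcp_init is doing here is splitting the two E810 ports found earlier into an initiator/target pair: cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, while cvl_0_0 is moved into a dedicated namespace and addressed as the target at 10.0.0.2. Condensed into plain commands (device and namespace names as discovered in this run), the setup amounts to:

  # Target port isolated in its own network namespace; initiator port left in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

The entries that follow bring both links up and verify reachability in each direction with a single ping.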
00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:15.565 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:15.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:15.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:39:15.823 00:39:15.823 --- 10.0.0.2 ping statistics --- 00:39:15.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.823 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:15.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:15.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:39:15.823 00:39:15.823 --- 10.0.0.1 ping statistics --- 00:39:15.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.823 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=383447 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 383447 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 383447 ']' 00:39:15.823 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:15.824 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:15.824 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:15.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:15.824 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:15.824 00:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:15.824 [2024-11-20 00:03:50.004144] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:15.824 [2024-11-20 00:03:50.005565] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:39:15.824 [2024-11-20 00:03:50.005623] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:15.824 [2024-11-20 00:03:50.080644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:15.824 [2024-11-20 00:03:50.128806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:15.824 [2024-11-20 00:03:50.128869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:15.824 [2024-11-20 00:03:50.128898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:15.824 [2024-11-20 00:03:50.128908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:15.824 [2024-11-20 00:03:50.128918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:15.824 [2024-11-20 00:03:50.130649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:15.824 [2024-11-20 00:03:50.130723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:15.824 [2024-11-20 00:03:50.130782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:15.824 [2024-11-20 00:03:50.130786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.082 [2024-11-20 00:03:50.215063] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:16.082 [2024-11-20 00:03:50.215305] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:16.082 [2024-11-20 00:03:50.215619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
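With the namespace plumbed and TCP port 4420 opened in iptables, nvmfappstart launches the target inside that namespace with interrupt mode enabled (hence the reactor and "intr mode" notices above) and blocks until the RPC socket answers, after which the nmic test can issue its rpc_cmd calls. A hedged sketch of that launch step, using spdk_get_version as an illustrative stand-in for the harness's waitforlisten helper:

  # Start nvmf_tgt on four cores (-m 0xF) with all trace groups enabled (-e 0xFFFF), in interrupt mode.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!                                             # 383447 in this run
  # Poll the default RPC socket until the application responds.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done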
00:39:16.082 [2024-11-20 00:03:50.216269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:16.082 [2024-11-20 00:03:50.216533] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:16.082 [2024-11-20 00:03:50.275493] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:16.082 Malloc0 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:16.082 
00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:16.082 [2024-11-20 00:03:50.343702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:16.082 test case1: single bdev can't be used in multiple subsystems 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:16.082 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:16.083 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.083 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:16.083 [2024-11-20 00:03:50.367408] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:16.083 [2024-11-20 00:03:50.367438] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:16.083 [2024-11-20 00:03:50.367469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.083 request: 00:39:16.083 { 00:39:16.083 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:16.083 "namespace": { 00:39:16.083 "bdev_name": "Malloc0", 00:39:16.083 "no_auto_visible": false 00:39:16.083 }, 00:39:16.083 "method": "nvmf_subsystem_add_ns", 00:39:16.083 "req_id": 1 00:39:16.083 } 00:39:16.083 Got JSON-RPC error response 00:39:16.083 response: 00:39:16.083 { 00:39:16.083 "code": -32602, 00:39:16.083 "message": "Invalid parameters" 00:39:16.083 } 00:39:16.083 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:16.083 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:16.083 00:03:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:16.083 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:16.083 Adding namespace failed - expected result. 00:39:16.083 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:16.083 test case2: host connect to nvmf target in multiple paths 00:39:16.083 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:16.083 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.083 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:16.083 [2024-11-20 00:03:50.375519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:16.083 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.083 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:16.340 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:16.598 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:16.598 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:16.598 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:16.598 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:16.598 00:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:18.492 00:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:18.493 00:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:18.493 00:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:18.493 00:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:18.493 00:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:18.493 00:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:18.493 00:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:18.493 [global] 00:39:18.493 thread=1 00:39:18.493 invalidate=1 
00:39:18.493 rw=write 00:39:18.493 time_based=1 00:39:18.493 runtime=1 00:39:18.493 ioengine=libaio 00:39:18.493 direct=1 00:39:18.493 bs=4096 00:39:18.493 iodepth=1 00:39:18.493 norandommap=0 00:39:18.493 numjobs=1 00:39:18.493 00:39:18.750 verify_dump=1 00:39:18.750 verify_backlog=512 00:39:18.750 verify_state_save=0 00:39:18.750 do_verify=1 00:39:18.750 verify=crc32c-intel 00:39:18.750 [job0] 00:39:18.750 filename=/dev/nvme0n1 00:39:18.750 Could not set queue depth (nvme0n1) 00:39:18.750 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:18.750 fio-3.35 00:39:18.750 Starting 1 thread 00:39:20.125 00:39:20.125 job0: (groupid=0, jobs=1): err= 0: pid=383875: Wed Nov 20 00:03:54 2024 00:39:20.125 read: IOPS=1351, BW=5406KiB/s (5536kB/s)(5552KiB/1027msec) 00:39:20.125 slat (nsec): min=4431, max=64642, avg=11610.59, stdev=7591.95 00:39:20.125 clat (usec): min=207, max=42019, avg=526.27, stdev=3330.62 00:39:20.125 lat (usec): min=218, max=42047, avg=537.88, stdev=3331.76 00:39:20.125 clat percentiles (usec): 00:39:20.125 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 235], 00:39:20.125 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:39:20.125 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 289], 95.00th=[ 388], 00:39:20.125 | 99.00th=[ 502], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:39:20.125 | 99.99th=[42206] 00:39:20.125 write: IOPS=1495, BW=5982KiB/s (6126kB/s)(6144KiB/1027msec); 0 zone resets 00:39:20.125 slat (nsec): min=5827, max=41809, avg=12143.56, stdev=5601.95 00:39:20.125 clat (usec): min=142, max=268, avg=163.50, stdev=12.49 00:39:20.125 lat (usec): min=148, max=297, avg=175.65, stdev=14.91 00:39:20.125 clat percentiles (usec): 00:39:20.125 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 149], 20.00th=[ 153], 00:39:20.125 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:39:20.125 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 186], 00:39:20.125 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 235], 99.95th=[ 269], 00:39:20.125 | 99.99th=[ 269] 00:39:20.125 bw ( KiB/s): min= 1968, max=10320, per=100.00%, avg=6144.00, stdev=5905.76, samples=2 00:39:20.125 iops : min= 492, max= 2580, avg=1536.00, stdev=1476.44, samples=2 00:39:20.125 lat (usec) : 250=81.36%, 500=18.09%, 750=0.24% 00:39:20.125 lat (msec) : 50=0.31% 00:39:20.125 cpu : usr=1.36%, sys=4.00%, ctx=2924, majf=0, minf=1 00:39:20.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.125 issued rwts: total=1388,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:20.125 00:39:20.125 Run status group 0 (all jobs): 00:39:20.125 READ: bw=5406KiB/s (5536kB/s), 5406KiB/s-5406KiB/s (5536kB/s-5536kB/s), io=5552KiB (5685kB), run=1027-1027msec 00:39:20.125 WRITE: bw=5982KiB/s (6126kB/s), 5982KiB/s-5982KiB/s (6126kB/s-6126kB/s), io=6144KiB (6291kB), run=1027-1027msec 00:39:20.125 00:39:20.125 Disk stats (read/write): 00:39:20.125 nvme0n1: ios=1434/1536, merge=0/0, ticks=599/243, in_queue=842, util=91.58% 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:20.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:20.125 00:03:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:20.125 rmmod nvme_tcp 00:39:20.125 rmmod nvme_fabrics 00:39:20.125 rmmod nvme_keyring 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:20.125 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 383447 ']' 00:39:20.126 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 383447 00:39:20.126 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 383447 ']' 00:39:20.126 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 383447 00:39:20.126 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:20.126 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:20.126 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383447 00:39:20.126 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:20.126 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:20.126 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 383447' 00:39:20.126 killing process with pid 383447 00:39:20.126 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 383447 00:39:20.126 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 383447 00:39:20.384 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:20.384 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:20.384 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:20.384 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:20.384 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:39:20.384 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:20.384 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:39:20.384 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:20.384 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:20.384 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:20.384 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:20.384 00:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:22.918 00:39:22.918 real 0m9.039s 00:39:22.918 user 0m16.566s 00:39:22.918 sys 0m3.512s 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:22.918 ************************************ 00:39:22.918 END TEST nvmf_nmic 00:39:22.918 ************************************ 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:22.918 ************************************ 00:39:22.918 START TEST nvmf_fio_target 00:39:22.918 ************************************ 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:22.918 * Looking for test storage... 
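Before the next test gets going, it is worth spelling out what the cleanup just above actually did: kill the target, unload the initiator-side modules, strip only the firewall rules tagged with the SPDK_NVMF comment, and dismantle the namespace. A rough shell equivalent, reconstructed from the commands visible in the trace (ip netns delete stands in for the harness's _remove_spdk_ns helper and is an assumption about what it does):

# Stop the target that was started for the test.
kill -9 "$tgt_pid" 2>/dev/null || true      # the harness uses killprocess <nvmfpid>

# Unload the initiator-side kernel modules.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Remove only the rules this run added, keyed on their SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Tear down the target-side namespace and flush the initiator interface.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1

Keeping this teardown idempotent is what lets the next test rebuild the same 10.0.0.1/10.0.0.2 topology from scratch, as it does below.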
00:39:22.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:22.918 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:22.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.919 --rc genhtml_branch_coverage=1 00:39:22.919 --rc genhtml_function_coverage=1 00:39:22.919 --rc genhtml_legend=1 00:39:22.919 --rc geninfo_all_blocks=1 00:39:22.919 --rc geninfo_unexecuted_blocks=1 00:39:22.919 00:39:22.919 ' 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:22.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.919 --rc genhtml_branch_coverage=1 00:39:22.919 --rc genhtml_function_coverage=1 00:39:22.919 --rc genhtml_legend=1 00:39:22.919 --rc geninfo_all_blocks=1 00:39:22.919 --rc geninfo_unexecuted_blocks=1 00:39:22.919 00:39:22.919 ' 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:22.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.919 --rc genhtml_branch_coverage=1 00:39:22.919 --rc genhtml_function_coverage=1 00:39:22.919 --rc genhtml_legend=1 00:39:22.919 --rc geninfo_all_blocks=1 00:39:22.919 --rc geninfo_unexecuted_blocks=1 00:39:22.919 00:39:22.919 ' 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:22.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.919 --rc genhtml_branch_coverage=1 00:39:22.919 --rc genhtml_function_coverage=1 00:39:22.919 --rc genhtml_legend=1 00:39:22.919 --rc geninfo_all_blocks=1 00:39:22.919 --rc geninfo_unexecuted_blocks=1 00:39:22.919 
00:39:22.919 ' 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:22.919 00:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:24.821 00:03:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:24.821 00:03:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:24.821 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:24.821 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:24.821 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:24.821 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:24.821 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:24.822 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:24.822 00:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:24.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:24.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:39:24.822 00:39:24.822 --- 10.0.0.2 ping statistics --- 00:39:24.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.822 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:24.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:24.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:39:24.822 00:39:24.822 --- 10.0.0.1 ping statistics --- 00:39:24.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.822 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=386025 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 386025 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 386025 ']' 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:24.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
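The environment this second test just rebuilt is the same two-port topology as before: one port pushed into a private namespace for the target, its sibling left in the root namespace for the initiator, a tagged firewall exception for TCP/4420, and a ping in each direction before anything NVMe-related runs. Condensed into plain commands, assuming the cvl_0_0/cvl_0_1 interface names the harness detected:

# Namespace and addressing for the target-side port.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic in, tagged so teardown can find the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Reachability check in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1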
00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:24.822 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:25.079 [2024-11-20 00:03:59.130827] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:25.079 [2024-11-20 00:03:59.131910] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:39:25.079 [2024-11-20 00:03:59.131989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:25.079 [2024-11-20 00:03:59.203885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:25.079 [2024-11-20 00:03:59.250048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:25.079 [2024-11-20 00:03:59.250121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:25.079 [2024-11-20 00:03:59.250165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:25.079 [2024-11-20 00:03:59.250177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:25.079 [2024-11-20 00:03:59.250187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:25.079 [2024-11-20 00:03:59.251722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:25.079 [2024-11-20 00:03:59.251774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:25.079 [2024-11-20 00:03:59.251826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:25.079 [2024-11-20 00:03:59.251829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:25.079 [2024-11-20 00:03:59.333418] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:25.079 [2024-11-20 00:03:59.333647] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:25.079 [2024-11-20 00:03:59.333905] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:25.079 [2024-11-20 00:03:59.334480] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:25.079 [2024-11-20 00:03:59.334710] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
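The tracepoint notices above are the main debugging hook for a run like this: the target was started with -e 0xFFFF, so every tracepoint group is enabled, and the application itself prints the two ways to get at the data. Following those hints (paths assume the standard build layout; the -f form for a copied trace file reflects the tool's usual options, so verify against the SPDK version in use):

# Live snapshot of the nvmf application's trace ring, as suggested in the log.
./build/bin/spdk_trace -s nvmf -i 0

# Or keep the shared-memory trace file for offline analysis after the run.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
./build/bin/spdk_trace -f /tmp/nvmf_trace.0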
00:39:25.079 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:25.079 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:25.079 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:25.079 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:25.079 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:25.079 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:25.079 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:25.336 [2024-11-20 00:03:59.640534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:25.595 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:25.853 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:25.853 00:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:26.111 00:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:26.111 00:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:26.370 00:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:26.370 00:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:26.629 00:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:26.629 00:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:26.887 00:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:27.145 00:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:27.145 00:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:27.710 00:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:27.710 00:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:27.710 00:04:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:27.710 00:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:27.969 00:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:28.535 00:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:28.535 00:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:28.535 00:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:28.535 00:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:29.102 00:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:29.102 [2024-11-20 00:04:03.396752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:29.360 00:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:29.618 00:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:29.876 00:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:29.876 00:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:29.876 00:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:29.876 00:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:29.876 00:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:29.876 00:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:29.876 00:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:39:32.405 00:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:32.405 00:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:39:32.405 00:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:32.405 00:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:39:32.405 00:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:32.405 00:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:39:32.405 00:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:32.405 [global] 00:39:32.405 thread=1 00:39:32.405 invalidate=1 00:39:32.405 rw=write 00:39:32.405 time_based=1 00:39:32.405 runtime=1 00:39:32.405 ioengine=libaio 00:39:32.405 direct=1 00:39:32.405 bs=4096 00:39:32.405 iodepth=1 00:39:32.405 norandommap=0 00:39:32.405 numjobs=1 00:39:32.405 00:39:32.405 verify_dump=1 00:39:32.405 verify_backlog=512 00:39:32.405 verify_state_save=0 00:39:32.405 do_verify=1 00:39:32.405 verify=crc32c-intel 00:39:32.405 [job0] 00:39:32.405 filename=/dev/nvme0n1 00:39:32.405 [job1] 00:39:32.405 filename=/dev/nvme0n2 00:39:32.405 [job2] 00:39:32.405 filename=/dev/nvme0n3 00:39:32.405 [job3] 00:39:32.405 filename=/dev/nvme0n4 00:39:32.405 Could not set queue depth (nvme0n1) 00:39:32.405 Could not set queue depth (nvme0n2) 00:39:32.405 Could not set queue depth (nvme0n3) 00:39:32.405 Could not set queue depth (nvme0n4) 00:39:32.405 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:32.405 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:32.405 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:32.405 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:32.405 fio-3.35 00:39:32.405 Starting 4 threads 00:39:33.338 00:39:33.338 job0: (groupid=0, jobs=1): err= 0: pid=386960: Wed Nov 20 00:04:07 2024 00:39:33.338 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:39:33.338 slat (nsec): min=7060, max=17510, avg=15201.82, stdev=2583.18 00:39:33.338 clat (usec): min=40933, max=41256, avg=40998.05, stdev=66.53 00:39:33.338 lat (usec): min=40946, max=41263, avg=41013.26, stdev=65.00 00:39:33.338 clat percentiles (usec): 00:39:33.338 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:33.338 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:33.338 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:33.338 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:33.338 | 99.99th=[41157] 00:39:33.338 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:39:33.338 slat (nsec): min=6143, max=33429, avg=9441.88, stdev=3916.44 00:39:33.338 clat (usec): min=153, max=672, avg=229.09, stdev=65.02 00:39:33.338 lat (usec): min=160, max=689, avg=238.53, stdev=66.52 00:39:33.338 clat percentiles (usec): 00:39:33.338 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 184], 00:39:33.338 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 219], 00:39:33.338 | 70.00th=[ 245], 80.00th=[ 269], 90.00th=[ 322], 95.00th=[ 375], 00:39:33.338 | 
99.00th=[ 429], 99.50th=[ 453], 99.90th=[ 676], 99.95th=[ 676], 00:39:33.338 | 99.99th=[ 676] 00:39:33.338 bw ( KiB/s): min= 4096, max= 4096, per=26.13%, avg=4096.00, stdev= 0.00, samples=1 00:39:33.338 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:33.338 lat (usec) : 250=73.22%, 500=22.28%, 750=0.37% 00:39:33.338 lat (msec) : 50=4.12% 00:39:33.338 cpu : usr=0.29%, sys=0.29%, ctx=535, majf=0, minf=1 00:39:33.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.338 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:33.338 job1: (groupid=0, jobs=1): err= 0: pid=386961: Wed Nov 20 00:04:07 2024 00:39:33.338 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:39:33.338 slat (nsec): min=4139, max=62003, avg=8218.83, stdev=5155.84 00:39:33.338 clat (usec): min=201, max=566, avg=239.67, stdev=33.03 00:39:33.338 lat (usec): min=205, max=588, avg=247.89, stdev=35.54 00:39:33.338 clat percentiles (usec): 00:39:33.338 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 225], 00:39:33.338 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 239], 00:39:33.338 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:39:33.338 | 99.00th=[ 412], 99.50th=[ 457], 99.90th=[ 553], 99.95th=[ 562], 00:39:33.338 | 99.99th=[ 570] 00:39:33.338 write: IOPS=2481, BW=9926KiB/s (10.2MB/s)(9936KiB/1001msec); 0 zone resets 00:39:33.338 slat (nsec): min=5600, max=49624, avg=10702.39, stdev=5079.41 00:39:33.338 clat (usec): min=139, max=1585, avg=182.74, stdev=54.89 00:39:33.338 lat (usec): min=147, max=1595, avg=193.44, stdev=55.18 00:39:33.338 clat percentiles (usec): 00:39:33.338 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:39:33.338 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:39:33.338 | 70.00th=[ 176], 80.00th=[ 196], 90.00th=[ 233], 95.00th=[ 269], 00:39:33.338 | 99.00th=[ 392], 99.50th=[ 445], 99.90th=[ 494], 99.95th=[ 498], 00:39:33.338 | 99.99th=[ 1582] 00:39:33.338 bw ( KiB/s): min= 9784, max= 9784, per=62.43%, avg=9784.00, stdev= 0.00, samples=1 00:39:33.338 iops : min= 2446, max= 2446, avg=2446.00, stdev= 0.00, samples=1 00:39:33.338 lat (usec) : 250=89.61%, 500=10.19%, 750=0.18% 00:39:33.338 lat (msec) : 2=0.02% 00:39:33.338 cpu : usr=2.80%, sys=3.90%, ctx=4534, majf=0, minf=1 00:39:33.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.338 issued rwts: total=2048,2484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:33.338 job2: (groupid=0, jobs=1): err= 0: pid=386962: Wed Nov 20 00:04:07 2024 00:39:33.338 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:39:33.338 slat (nsec): min=7055, max=38119, avg=16321.91, stdev=5498.49 00:39:33.338 clat (usec): min=29274, max=41979, avg=40507.28, stdev=2530.80 00:39:33.338 lat (usec): min=29291, max=41993, avg=40523.60, stdev=2530.67 00:39:33.338 clat percentiles (usec): 00:39:33.338 | 1.00th=[29230], 5.00th=[40109], 10.00th=[41157], 20.00th=[41157], 00:39:33.338 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:39:33.338 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:39:33.338 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:33.338 | 99.99th=[42206] 00:39:33.338 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:39:33.338 slat (nsec): min=6856, max=37290, avg=9776.73, stdev=3882.75 00:39:33.338 clat (usec): min=165, max=546, avg=248.27, stdev=73.21 00:39:33.338 lat (usec): min=172, max=563, avg=258.05, stdev=75.02 00:39:33.338 clat percentiles (usec): 00:39:33.338 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:39:33.338 | 30.00th=[ 196], 40.00th=[ 206], 50.00th=[ 237], 60.00th=[ 245], 00:39:33.338 | 70.00th=[ 253], 80.00th=[ 285], 90.00th=[ 363], 95.00th=[ 408], 00:39:33.338 | 99.00th=[ 478], 99.50th=[ 523], 99.90th=[ 545], 99.95th=[ 545], 00:39:33.338 | 99.99th=[ 545] 00:39:33.338 bw ( KiB/s): min= 4096, max= 4096, per=26.13%, avg=4096.00, stdev= 0.00, samples=1 00:39:33.338 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:33.338 lat (usec) : 250=65.17%, 500=30.15%, 750=0.56% 00:39:33.338 lat (msec) : 50=4.12% 00:39:33.338 cpu : usr=0.10%, sys=0.59%, ctx=534, majf=0, minf=1 00:39:33.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.338 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:33.338 job3: (groupid=0, jobs=1): err= 0: pid=386963: Wed Nov 20 00:04:07 2024 00:39:33.338 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:39:33.338 slat (nsec): min=8036, max=14994, avg=13634.73, stdev=1804.30 00:39:33.338 clat (usec): min=283, max=42064, avg=39863.20, stdev=8854.72 00:39:33.338 lat (usec): min=296, max=42078, avg=39876.83, stdev=8854.95 00:39:33.338 clat percentiles (usec): 00:39:33.338 | 1.00th=[ 285], 5.00th=[40109], 10.00th=[40633], 20.00th=[41157], 00:39:33.338 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:39:33.338 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:33.338 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:33.338 | 99.99th=[42206] 00:39:33.338 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:39:33.338 slat (nsec): min=7927, max=35632, avg=10671.19, stdev=2867.30 00:39:33.338 clat (usec): min=157, max=1113, avg=264.16, stdev=85.62 00:39:33.338 lat (usec): min=165, max=1129, avg=274.83, stdev=86.18 00:39:33.338 clat percentiles (usec): 00:39:33.338 | 1.00th=[ 169], 5.00th=[ 194], 10.00th=[ 210], 20.00th=[ 223], 00:39:33.338 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 249], 00:39:33.338 | 70.00th=[ 262], 80.00th=[ 293], 90.00th=[ 351], 95.00th=[ 396], 00:39:33.338 | 99.00th=[ 482], 99.50th=[ 889], 99.90th=[ 1106], 99.95th=[ 1106], 00:39:33.338 | 99.99th=[ 1106] 00:39:33.338 bw ( KiB/s): min= 4096, max= 4096, per=26.13%, avg=4096.00, stdev= 0.00, samples=1 00:39:33.338 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:33.338 lat (usec) : 250=59.55%, 500=35.58%, 750=0.19%, 1000=0.37% 00:39:33.338 lat (msec) : 2=0.37%, 50=3.93% 00:39:33.338 cpu : usr=0.59%, sys=0.39%, ctx=535, majf=0, minf=1 00:39:33.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:39:33.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.338 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:33.338 00:39:33.338 Run status group 0 (all jobs): 00:39:33.338 READ: bw=8242KiB/s (8440kB/s), 85.8KiB/s-8184KiB/s (87.8kB/s-8380kB/s), io=8456KiB (8659kB), run=1001-1026msec 00:39:33.338 WRITE: bw=15.3MiB/s (16.0MB/s), 1996KiB/s-9926KiB/s (2044kB/s-10.2MB/s), io=15.7MiB (16.5MB), run=1001-1026msec 00:39:33.338 00:39:33.338 Disk stats (read/write): 00:39:33.338 nvme0n1: ios=70/512, merge=0/0, ticks=949/115, in_queue=1064, util=97.80% 00:39:33.338 nvme0n2: ios=1795/2048, merge=0/0, ticks=1391/373, in_queue=1764, util=97.66% 00:39:33.338 nvme0n3: ios=17/512, merge=0/0, ticks=698/125, in_queue=823, util=88.89% 00:39:33.338 nvme0n4: ios=74/512, merge=0/0, ticks=886/131, in_queue=1017, util=97.67% 00:39:33.338 00:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:33.595 [global] 00:39:33.595 thread=1 00:39:33.595 invalidate=1 00:39:33.595 rw=randwrite 00:39:33.595 time_based=1 00:39:33.595 runtime=1 00:39:33.595 ioengine=libaio 00:39:33.595 direct=1 00:39:33.595 bs=4096 00:39:33.595 iodepth=1 00:39:33.595 norandommap=0 00:39:33.595 numjobs=1 00:39:33.595 00:39:33.595 verify_dump=1 00:39:33.595 verify_backlog=512 00:39:33.595 verify_state_save=0 00:39:33.595 do_verify=1 00:39:33.595 verify=crc32c-intel 00:39:33.595 [job0] 00:39:33.595 filename=/dev/nvme0n1 00:39:33.595 [job1] 00:39:33.595 filename=/dev/nvme0n2 00:39:33.595 [job2] 00:39:33.595 filename=/dev/nvme0n3 00:39:33.595 [job3] 00:39:33.595 filename=/dev/nvme0n4 00:39:33.595 Could not set queue depth (nvme0n1) 00:39:33.595 Could not set queue depth (nvme0n2) 00:39:33.595 Could not set queue depth (nvme0n3) 00:39:33.595 Could not set queue depth (nvme0n4) 00:39:33.595 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:33.595 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:33.595 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:33.596 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:33.596 fio-3.35 00:39:33.596 Starting 4 threads 00:39:34.969 00:39:34.969 job0: (groupid=0, jobs=1): err= 0: pid=387315: Wed Nov 20 00:04:09 2024 00:39:34.969 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:39:34.969 slat (nsec): min=6981, max=28614, avg=17096.10, stdev=6469.19 00:39:34.969 clat (usec): min=40584, max=41028, avg=40956.61, stdev=88.86 00:39:34.969 lat (usec): min=40591, max=41042, avg=40973.71, stdev=91.55 00:39:34.969 clat percentiles (usec): 00:39:34.969 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:34.969 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:34.969 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:34.969 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:34.969 | 99.99th=[41157] 00:39:34.969 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:39:34.969 slat (nsec): 
min=6938, max=62997, avg=13053.59, stdev=7261.13 00:39:34.970 clat (usec): min=205, max=502, avg=272.89, stdev=42.25 00:39:34.970 lat (usec): min=220, max=553, avg=285.94, stdev=43.82 00:39:34.970 clat percentiles (usec): 00:39:34.970 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 245], 00:39:34.970 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:39:34.970 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 363], 00:39:34.970 | 99.00th=[ 461], 99.50th=[ 490], 99.90th=[ 502], 99.95th=[ 502], 00:39:34.970 | 99.99th=[ 502] 00:39:34.970 bw ( KiB/s): min= 4096, max= 4096, per=29.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:34.970 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:34.970 lat (usec) : 250=28.52%, 500=67.35%, 750=0.19% 00:39:34.970 lat (msec) : 50=3.94% 00:39:34.970 cpu : usr=0.50%, sys=0.79%, ctx=533, majf=0, minf=2 00:39:34.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:34.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:34.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:34.970 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:34.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:34.970 job1: (groupid=0, jobs=1): err= 0: pid=387316: Wed Nov 20 00:04:09 2024 00:39:34.970 read: IOPS=20, BW=83.3KiB/s (85.3kB/s)(84.0KiB/1008msec) 00:39:34.970 slat (nsec): min=8600, max=19581, avg=13798.43, stdev=2044.06 00:39:34.970 clat (usec): min=40780, max=41019, avg=40967.79, stdev=51.05 00:39:34.970 lat (usec): min=40789, max=41032, avg=40981.58, stdev=52.02 00:39:34.970 clat percentiles (usec): 00:39:34.970 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:34.970 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:34.970 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:34.970 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:34.970 | 99.99th=[41157] 00:39:34.970 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:39:34.970 slat (nsec): min=7575, max=57171, avg=14341.77, stdev=7807.69 00:39:34.970 clat (usec): min=199, max=501, avg=268.82, stdev=37.53 00:39:34.970 lat (usec): min=214, max=509, avg=283.16, stdev=37.19 00:39:34.970 clat percentiles (usec): 00:39:34.970 | 1.00th=[ 210], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 243], 00:39:34.970 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 269], 00:39:34.970 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 351], 00:39:34.970 | 99.00th=[ 400], 99.50th=[ 404], 99.90th=[ 502], 99.95th=[ 502], 00:39:34.970 | 99.99th=[ 502] 00:39:34.970 bw ( KiB/s): min= 4096, max= 4096, per=29.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:34.970 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:34.970 lat (usec) : 250=29.46%, 500=66.42%, 750=0.19% 00:39:34.970 lat (msec) : 50=3.94% 00:39:34.970 cpu : usr=0.60%, sys=0.89%, ctx=534, majf=0, minf=1 00:39:34.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:34.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:34.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:34.970 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:34.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:34.970 job2: (groupid=0, jobs=1): err= 0: pid=387317: 
Wed Nov 20 00:04:09 2024 00:39:34.970 read: IOPS=1779, BW=7117KiB/s (7288kB/s)(7224KiB/1015msec) 00:39:34.970 slat (nsec): min=4334, max=32564, avg=6226.94, stdev=2838.28 00:39:34.970 clat (usec): min=190, max=41006, avg=331.03, stdev=1912.60 00:39:34.970 lat (usec): min=194, max=41022, avg=337.26, stdev=1913.05 00:39:34.970 clat percentiles (usec): 00:39:34.970 | 1.00th=[ 208], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 215], 00:39:34.970 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 241], 00:39:34.970 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 310], 00:39:34.970 | 99.00th=[ 457], 99.50th=[ 523], 99.90th=[41157], 99.95th=[41157], 00:39:34.970 | 99.99th=[41157] 00:39:34.970 write: IOPS=2017, BW=8071KiB/s (8265kB/s)(8192KiB/1015msec); 0 zone resets 00:39:34.970 slat (nsec): min=5958, max=40582, avg=8700.32, stdev=4085.12 00:39:34.970 clat (usec): min=147, max=412, avg=185.56, stdev=44.51 00:39:34.970 lat (usec): min=154, max=441, avg=194.26, stdev=46.46 00:39:34.970 clat percentiles (usec): 00:39:34.970 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 157], 00:39:34.970 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:39:34.970 | 70.00th=[ 182], 80.00th=[ 215], 90.00th=[ 243], 95.00th=[ 260], 00:39:34.970 | 99.00th=[ 388], 99.50th=[ 396], 99.90th=[ 408], 99.95th=[ 412], 00:39:34.970 | 99.99th=[ 412] 00:39:34.970 bw ( KiB/s): min= 7616, max= 8768, per=58.00%, avg=8192.00, stdev=814.59, samples=2 00:39:34.970 iops : min= 1904, max= 2192, avg=2048.00, stdev=203.65, samples=2 00:39:34.970 lat (usec) : 250=84.20%, 500=15.44%, 750=0.26% 00:39:34.970 lat (msec) : 50=0.10% 00:39:34.970 cpu : usr=1.38%, sys=2.96%, ctx=3855, majf=0, minf=1 00:39:34.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:34.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:34.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:34.970 issued rwts: total=1806,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:34.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:34.970 job3: (groupid=0, jobs=1): err= 0: pid=387318: Wed Nov 20 00:04:09 2024 00:39:34.970 read: IOPS=22, BW=90.8KiB/s (93.0kB/s)(92.0KiB/1013msec) 00:39:34.970 slat (nsec): min=7060, max=16972, avg=13504.57, stdev=1777.53 00:39:34.970 clat (usec): min=396, max=41120, avg=39188.96, stdev=8457.28 00:39:34.970 lat (usec): min=411, max=41134, avg=39202.47, stdev=8456.91 00:39:34.970 clat percentiles (usec): 00:39:34.970 | 1.00th=[ 396], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:39:34.970 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:34.970 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:34.970 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:34.970 | 99.99th=[41157] 00:39:34.970 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:39:34.970 slat (nsec): min=5876, max=34125, avg=10765.43, stdev=5357.27 00:39:34.970 clat (usec): min=164, max=290, avg=203.89, stdev=18.07 00:39:34.970 lat (usec): min=170, max=310, avg=214.65, stdev=19.21 00:39:34.970 clat percentiles (usec): 00:39:34.970 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:39:34.970 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:39:34.970 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 225], 95.00th=[ 235], 00:39:34.970 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 289], 
00:39:34.970 | 99.99th=[ 289] 00:39:34.970 bw ( KiB/s): min= 4096, max= 4096, per=29.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:34.970 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:34.970 lat (usec) : 250=93.64%, 500=2.24% 00:39:34.970 lat (msec) : 50=4.11% 00:39:34.970 cpu : usr=0.30%, sys=0.49%, ctx=535, majf=0, minf=2 00:39:34.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:34.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:34.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:34.970 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:34.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:34.970 00:39:34.970 Run status group 0 (all jobs): 00:39:34.970 READ: bw=7373KiB/s (7550kB/s), 83.2KiB/s-7117KiB/s (85.2kB/s-7288kB/s), io=7484KiB (7664kB), run=1008-1015msec 00:39:34.970 WRITE: bw=13.8MiB/s (14.5MB/s), 2022KiB/s-8071KiB/s (2070kB/s-8265kB/s), io=14.0MiB (14.7MB), run=1008-1015msec 00:39:34.970 00:39:34.970 Disk stats (read/write): 00:39:34.970 nvme0n1: ios=67/512, merge=0/0, ticks=722/129, in_queue=851, util=86.87% 00:39:34.970 nvme0n2: ios=43/512, merge=0/0, ticks=1681/129, in_queue=1810, util=98.38% 00:39:34.970 nvme0n3: ios=1858/2048, merge=0/0, ticks=1429/371, in_queue=1800, util=98.44% 00:39:34.970 nvme0n4: ios=74/512, merge=0/0, ticks=827/105, in_queue=932, util=91.39% 00:39:34.970 00:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:34.970 [global] 00:39:34.970 thread=1 00:39:34.970 invalidate=1 00:39:34.970 rw=write 00:39:34.970 time_based=1 00:39:34.970 runtime=1 00:39:34.970 ioengine=libaio 00:39:34.970 direct=1 00:39:34.970 bs=4096 00:39:34.970 iodepth=128 00:39:34.970 norandommap=0 00:39:34.970 numjobs=1 00:39:34.970 00:39:34.970 verify_dump=1 00:39:34.970 verify_backlog=512 00:39:34.970 verify_state_save=0 00:39:34.970 do_verify=1 00:39:34.970 verify=crc32c-intel 00:39:34.970 [job0] 00:39:34.970 filename=/dev/nvme0n1 00:39:34.970 [job1] 00:39:34.970 filename=/dev/nvme0n2 00:39:34.970 [job2] 00:39:34.970 filename=/dev/nvme0n3 00:39:34.970 [job3] 00:39:34.970 filename=/dev/nvme0n4 00:39:34.970 Could not set queue depth (nvme0n1) 00:39:34.970 Could not set queue depth (nvme0n2) 00:39:34.970 Could not set queue depth (nvme0n3) 00:39:34.970 Could not set queue depth (nvme0n4) 00:39:35.228 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:35.228 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:35.228 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:35.228 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:35.228 fio-3.35 00:39:35.228 Starting 4 threads 00:39:36.604 00:39:36.604 job0: (groupid=0, jobs=1): err= 0: pid=387542: Wed Nov 20 00:04:10 2024 00:39:36.604 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:39:36.604 slat (usec): min=2, max=18131, avg=92.48, stdev=747.18 00:39:36.604 clat (usec): min=4456, max=49880, avg=12632.00, stdev=4987.35 00:39:36.604 lat (usec): min=4474, max=57983, avg=12724.49, stdev=5055.61 00:39:36.604 clat percentiles (usec): 00:39:36.604 | 1.00th=[ 6587], 5.00th=[ 8356], 
10.00th=[ 8848], 20.00th=[ 9372], 00:39:36.604 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10683], 60.00th=[11076], 00:39:36.604 | 70.00th=[13042], 80.00th=[15795], 90.00th=[19530], 95.00th=[21365], 00:39:36.604 | 99.00th=[30278], 99.50th=[31327], 99.90th=[49021], 99.95th=[49021], 00:39:36.604 | 99.99th=[50070] 00:39:36.604 write: IOPS=5370, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1008msec); 0 zone resets 00:39:36.604 slat (usec): min=3, max=12728, avg=87.39, stdev=665.97 00:39:36.605 clat (msec): min=2, max=104, avg=12.88, stdev=12.38 00:39:36.605 lat (msec): min=2, max=104, avg=12.97, stdev=12.46 00:39:36.605 clat percentiles (msec): 00:39:36.605 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:39:36.605 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:39:36.605 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 15], 95.00th=[ 21], 00:39:36.605 | 99.00th=[ 88], 99.50th=[ 94], 99.90th=[ 106], 99.95th=[ 106], 00:39:36.605 | 99.99th=[ 106] 00:39:36.605 bw ( KiB/s): min=17704, max=24576, per=33.01%, avg=21140.00, stdev=4859.24, samples=2 00:39:36.605 iops : min= 4426, max= 6144, avg=5285.00, stdev=1214.81, samples=2 00:39:36.605 lat (msec) : 4=0.15%, 10=35.72%, 20=56.47%, 50=5.91%, 100=1.68% 00:39:36.605 lat (msec) : 250=0.07% 00:39:36.605 cpu : usr=6.26%, sys=12.81%, ctx=342, majf=0, minf=1 00:39:36.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:36.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:36.605 issued rwts: total=4608,5413,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:36.605 job1: (groupid=0, jobs=1): err= 0: pid=387543: Wed Nov 20 00:04:10 2024 00:39:36.605 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:39:36.605 slat (usec): min=2, max=24690, avg=127.50, stdev=1096.85 00:39:36.605 clat (msec): min=4, max=104, avg=17.32, stdev=11.18 00:39:36.605 lat (msec): min=4, max=104, avg=17.44, stdev=11.23 00:39:36.605 clat percentiles (msec): 00:39:36.605 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:39:36.605 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:39:36.605 | 70.00th=[ 19], 80.00th=[ 22], 90.00th=[ 31], 95.00th=[ 42], 00:39:36.605 | 99.00th=[ 45], 99.50th=[ 101], 99.90th=[ 105], 99.95th=[ 105], 00:39:36.605 | 99.99th=[ 105] 00:39:36.605 write: IOPS=3807, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1010msec); 0 zone resets 00:39:36.605 slat (usec): min=3, max=26357, avg=130.56, stdev=1012.87 00:39:36.605 clat (usec): min=4234, max=69496, avg=16734.34, stdev=10422.65 00:39:36.605 lat (usec): min=4239, max=69508, avg=16864.90, stdev=10490.15 00:39:36.605 clat percentiles (usec): 00:39:36.605 | 1.00th=[ 5866], 5.00th=[ 6718], 10.00th=[10552], 20.00th=[11076], 00:39:36.605 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12649], 60.00th=[13173], 00:39:36.605 | 70.00th=[15795], 80.00th=[22676], 90.00th=[30278], 95.00th=[41681], 00:39:36.605 | 99.00th=[62129], 99.50th=[62129], 99.90th=[69731], 99.95th=[69731], 00:39:36.605 | 99.99th=[69731] 00:39:36.605 bw ( KiB/s): min=14520, max=15232, per=23.23%, avg=14876.00, stdev=503.46, samples=2 00:39:36.605 iops : min= 3630, max= 3808, avg=3719.00, stdev=125.87, samples=2 00:39:36.605 lat (msec) : 10=9.80%, 20=66.31%, 50=22.25%, 100=1.55%, 250=0.09% 00:39:36.605 cpu : usr=2.78%, sys=4.96%, ctx=298, majf=0, minf=2 00:39:36.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.2% 00:39:36.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:36.605 issued rwts: total=3584,3846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:36.605 job2: (groupid=0, jobs=1): err= 0: pid=387544: Wed Nov 20 00:04:10 2024 00:39:36.605 read: IOPS=4099, BW=16.0MiB/s (16.8MB/s)(16.7MiB/1044msec) 00:39:36.605 slat (usec): min=4, max=16581, avg=118.22, stdev=950.82 00:39:36.605 clat (usec): min=8310, max=61509, avg=16801.28, stdev=7925.57 00:39:36.605 lat (usec): min=8325, max=61515, avg=16919.50, stdev=7981.01 00:39:36.605 clat percentiles (usec): 00:39:36.605 | 1.00th=[ 8848], 5.00th=[10814], 10.00th=[11207], 20.00th=[11863], 00:39:36.605 | 30.00th=[12780], 40.00th=[13173], 50.00th=[14746], 60.00th=[16188], 00:39:36.605 | 70.00th=[18220], 80.00th=[19268], 90.00th=[22676], 95.00th=[30016], 00:39:36.605 | 99.00th=[54264], 99.50th=[61080], 99.90th=[61604], 99.95th=[61604], 00:39:36.605 | 99.99th=[61604] 00:39:36.605 write: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:39:36.605 slat (usec): min=3, max=17609, avg=96.54, stdev=843.45 00:39:36.605 clat (usec): min=925, max=32337, avg=13140.93, stdev=3658.46 00:39:36.605 lat (usec): min=933, max=32384, avg=13237.48, stdev=3736.77 00:39:36.605 clat percentiles (usec): 00:39:36.605 | 1.00th=[ 6587], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[10683], 00:39:36.605 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:39:36.605 | 70.00th=[13960], 80.00th=[14746], 90.00th=[17695], 95.00th=[21627], 00:39:36.605 | 99.00th=[25035], 99.50th=[26084], 99.90th=[26608], 99.95th=[29492], 00:39:36.605 | 99.99th=[32375] 00:39:36.605 bw ( KiB/s): min=16560, max=20304, per=28.78%, avg=18432.00, stdev=2647.41, samples=2 00:39:36.605 iops : min= 4140, max= 5076, avg=4608.00, stdev=661.85, samples=2 00:39:36.605 lat (usec) : 1000=0.03% 00:39:36.605 lat (msec) : 4=0.14%, 10=8.21%, 20=79.49%, 50=11.35%, 100=0.78% 00:39:36.605 cpu : usr=6.04%, sys=9.49%, ctx=159, majf=0, minf=1 00:39:36.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:36.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:36.605 issued rwts: total=4280,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:36.605 job3: (groupid=0, jobs=1): err= 0: pid=387545: Wed Nov 20 00:04:10 2024 00:39:36.605 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:39:36.605 slat (usec): min=2, max=26906, avg=165.07, stdev=1399.06 00:39:36.605 clat (usec): min=10349, max=90669, avg=23828.69, stdev=13500.38 00:39:36.605 lat (usec): min=10359, max=99257, avg=23993.77, stdev=13632.71 00:39:36.605 clat percentiles (usec): 00:39:36.605 | 1.00th=[10421], 5.00th=[11863], 10.00th=[12256], 20.00th=[12649], 00:39:36.605 | 30.00th=[13566], 40.00th=[15139], 50.00th=[17171], 60.00th=[21365], 00:39:36.605 | 70.00th=[30802], 80.00th=[37487], 90.00th=[44827], 95.00th=[47449], 00:39:36.605 | 99.00th=[65799], 99.50th=[78119], 99.90th=[82314], 99.95th=[82314], 00:39:36.605 | 99.99th=[90702] 00:39:36.605 write: IOPS=2823, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1009msec); 0 zone resets 00:39:36.605 slat (usec): min=3, max=18687, avg=198.71, stdev=1332.88 00:39:36.605 clat (usec): min=630, 
max=155289, avg=23505.22, stdev=26208.36 00:39:36.605 lat (usec): min=786, max=155311, avg=23703.93, stdev=26394.94 00:39:36.605 clat percentiles (msec): 00:39:36.605 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 13], 00:39:36.605 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 16], 00:39:36.605 | 70.00th=[ 18], 80.00th=[ 25], 90.00th=[ 35], 95.00th=[ 99], 00:39:36.605 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 157], 00:39:36.605 | 99.99th=[ 157] 00:39:36.605 bw ( KiB/s): min= 8952, max=12816, per=16.99%, avg=10884.00, stdev=2732.26, samples=2 00:39:36.605 iops : min= 2238, max= 3204, avg=2721.00, stdev=683.07, samples=2 00:39:36.605 lat (usec) : 750=0.02%, 1000=0.04% 00:39:36.605 lat (msec) : 10=3.35%, 20=63.15%, 50=28.69%, 100=2.26%, 250=2.50% 00:39:36.605 cpu : usr=2.28%, sys=3.77%, ctx=165, majf=0, minf=1 00:39:36.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:39:36.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:36.605 issued rwts: total=2560,2849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:36.605 00:39:36.605 Run status group 0 (all jobs): 00:39:36.605 READ: bw=56.2MiB/s (59.0MB/s), 9.91MiB/s-17.9MiB/s (10.4MB/s-18.7MB/s), io=58.7MiB (61.6MB), run=1008-1044msec 00:39:36.605 WRITE: bw=62.5MiB/s (65.6MB/s), 11.0MiB/s-21.0MiB/s (11.6MB/s-22.0MB/s), io=65.3MiB (68.5MB), run=1008-1044msec 00:39:36.605 00:39:36.605 Disk stats (read/write): 00:39:36.605 nvme0n1: ios=4409/5120, merge=0/0, ticks=48302/50468, in_queue=98770, util=98.40% 00:39:36.605 nvme0n2: ios=2919/3072, merge=0/0, ticks=33722/25979, in_queue=59701, util=98.27% 00:39:36.605 nvme0n3: ios=3642/3710, merge=0/0, ticks=55827/47696, in_queue=103523, util=98.54% 00:39:36.605 nvme0n4: ios=1995/2048, merge=0/0, ticks=26141/30914, in_queue=57055, util=97.49% 00:39:36.605 00:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:36.605 [global] 00:39:36.605 thread=1 00:39:36.605 invalidate=1 00:39:36.605 rw=randwrite 00:39:36.605 time_based=1 00:39:36.605 runtime=1 00:39:36.605 ioengine=libaio 00:39:36.605 direct=1 00:39:36.605 bs=4096 00:39:36.605 iodepth=128 00:39:36.605 norandommap=0 00:39:36.605 numjobs=1 00:39:36.605 00:39:36.605 verify_dump=1 00:39:36.605 verify_backlog=512 00:39:36.605 verify_state_save=0 00:39:36.605 do_verify=1 00:39:36.605 verify=crc32c-intel 00:39:36.605 [job0] 00:39:36.605 filename=/dev/nvme0n1 00:39:36.605 [job1] 00:39:36.605 filename=/dev/nvme0n2 00:39:36.605 [job2] 00:39:36.605 filename=/dev/nvme0n3 00:39:36.605 [job3] 00:39:36.605 filename=/dev/nvme0n4 00:39:36.605 Could not set queue depth (nvme0n1) 00:39:36.605 Could not set queue depth (nvme0n2) 00:39:36.605 Could not set queue depth (nvme0n3) 00:39:36.605 Could not set queue depth (nvme0n4) 00:39:36.605 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:36.605 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:36.606 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:36.606 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:39:36.606 fio-3.35 00:39:36.606 Starting 4 threads 00:39:37.979 00:39:37.979 job0: (groupid=0, jobs=1): err= 0: pid=387776: Wed Nov 20 00:04:12 2024 00:39:37.979 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:39:37.979 slat (usec): min=2, max=17539, avg=97.52, stdev=850.92 00:39:37.979 clat (usec): min=731, max=57697, avg=14153.79, stdev=5534.52 00:39:37.979 lat (usec): min=744, max=57702, avg=14251.31, stdev=5606.54 00:39:37.979 clat percentiles (usec): 00:39:37.979 | 1.00th=[ 2704], 5.00th=[ 3064], 10.00th=[ 7701], 20.00th=[10290], 00:39:37.979 | 30.00th=[12125], 40.00th=[13173], 50.00th=[13960], 60.00th=[14877], 00:39:37.979 | 70.00th=[16581], 80.00th=[17433], 90.00th=[19268], 95.00th=[25035], 00:39:37.979 | 99.00th=[28181], 99.50th=[28443], 99.90th=[56886], 99.95th=[56886], 00:39:37.979 | 99.99th=[57934] 00:39:37.979 write: IOPS=3953, BW=15.4MiB/s (16.2MB/s)(15.6MiB/1011msec); 0 zone resets 00:39:37.979 slat (usec): min=3, max=31796, avg=126.35, stdev=1008.65 00:39:37.979 clat (usec): min=1978, max=68688, avg=19410.21, stdev=12943.15 00:39:37.979 lat (usec): min=1987, max=68698, avg=19536.56, stdev=13021.16 00:39:37.980 clat percentiles (usec): 00:39:37.980 | 1.00th=[ 3916], 5.00th=[ 6456], 10.00th=[ 9372], 20.00th=[10421], 00:39:37.980 | 30.00th=[12125], 40.00th=[13698], 50.00th=[14484], 60.00th=[16450], 00:39:37.980 | 70.00th=[19792], 80.00th=[25560], 90.00th=[41157], 95.00th=[51643], 00:39:37.980 | 99.00th=[58983], 99.50th=[61080], 99.90th=[68682], 99.95th=[68682], 00:39:37.980 | 99.99th=[68682] 00:39:37.980 bw ( KiB/s): min=14336, max=16624, per=23.12%, avg=15480.00, stdev=1617.86, samples=2 00:39:37.980 iops : min= 3584, max= 4156, avg=3870.00, stdev=404.47, samples=2 00:39:37.980 lat (usec) : 750=0.01% 00:39:37.980 lat (msec) : 2=0.08%, 4=3.84%, 10=10.72%, 20=66.42%, 50=15.62% 00:39:37.980 lat (msec) : 100=3.31% 00:39:37.980 cpu : usr=2.97%, sys=5.25%, ctx=248, majf=0, minf=1 00:39:37.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:37.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:37.980 issued rwts: total=3584,3997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:37.980 job1: (groupid=0, jobs=1): err= 0: pid=387777: Wed Nov 20 00:04:12 2024 00:39:37.980 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:39:37.980 slat (usec): min=2, max=30946, avg=114.77, stdev=992.56 00:39:37.980 clat (usec): min=3295, max=55548, avg=15128.32, stdev=7486.07 00:39:37.980 lat (usec): min=3301, max=61423, avg=15243.09, stdev=7571.05 00:39:37.980 clat percentiles (usec): 00:39:37.980 | 1.00th=[ 6259], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10683], 00:39:37.980 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12911], 00:39:37.980 | 70.00th=[13435], 80.00th=[19530], 90.00th=[29492], 95.00th=[33162], 00:39:37.980 | 99.00th=[35914], 99.50th=[35914], 99.90th=[52691], 99.95th=[53740], 00:39:37.980 | 99.99th=[55313] 00:39:37.980 write: IOPS=4368, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1004msec); 0 zone resets 00:39:37.980 slat (usec): min=3, max=19978, avg=109.44, stdev=825.09 00:39:37.980 clat (usec): min=2730, max=55047, avg=14902.07, stdev=7867.23 00:39:37.980 lat (usec): min=2739, max=56383, avg=15011.50, stdev=7926.77 00:39:37.980 clat percentiles (usec): 00:39:37.980 | 1.00th=[ 4490], 5.00th=[ 8848], 10.00th=[10290], 
20.00th=[10683], 00:39:37.980 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:39:37.980 | 70.00th=[13042], 80.00th=[19268], 90.00th=[23987], 95.00th=[30278], 00:39:37.980 | 99.00th=[49021], 99.50th=[52167], 99.90th=[54789], 99.95th=[54789], 00:39:37.980 | 99.99th=[54789] 00:39:37.980 bw ( KiB/s): min=14164, max=19936, per=25.47%, avg=17050.00, stdev=4081.42, samples=2 00:39:37.980 iops : min= 3541, max= 4984, avg=4262.50, stdev=1020.36, samples=2 00:39:37.980 lat (msec) : 4=0.29%, 10=9.90%, 20=72.81%, 50=16.55%, 100=0.44% 00:39:37.980 cpu : usr=4.99%, sys=7.28%, ctx=344, majf=0, minf=1 00:39:37.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:37.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:37.980 issued rwts: total=4096,4386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:37.980 job2: (groupid=0, jobs=1): err= 0: pid=387778: Wed Nov 20 00:04:12 2024 00:39:37.980 read: IOPS=3359, BW=13.1MiB/s (13.8MB/s)(13.3MiB/1010msec) 00:39:37.980 slat (usec): min=3, max=26623, avg=105.74, stdev=882.15 00:39:37.980 clat (usec): min=1185, max=49801, avg=14069.39, stdev=7250.87 00:39:37.980 lat (usec): min=1200, max=49806, avg=14175.13, stdev=7308.51 00:39:37.980 clat percentiles (usec): 00:39:37.980 | 1.00th=[ 1958], 5.00th=[ 2868], 10.00th=[ 5080], 20.00th=[10552], 00:39:37.980 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12387], 60.00th=[13304], 00:39:37.980 | 70.00th=[14746], 80.00th=[18744], 90.00th=[21890], 95.00th=[30016], 00:39:37.980 | 99.00th=[37487], 99.50th=[38011], 99.90th=[49546], 99.95th=[49546], 00:39:37.980 | 99.99th=[49546] 00:39:37.980 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:39:37.980 slat (usec): min=4, max=19501, avg=118.65, stdev=812.14 00:39:37.980 clat (usec): min=344, max=175828, avg=17442.54, stdev=24972.58 00:39:37.980 lat (usec): min=354, max=175836, avg=17561.19, stdev=25088.51 00:39:37.980 clat percentiles (msec): 00:39:37.980 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8], 00:39:37.980 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 14], 00:39:37.980 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 22], 95.00th=[ 44], 00:39:37.980 | 99.00th=[ 171], 99.50th=[ 176], 99.90th=[ 176], 99.95th=[ 176], 00:39:37.980 | 99.99th=[ 176] 00:39:37.980 bw ( KiB/s): min=15744, max=21120, per=27.53%, avg=18432.00, stdev=3801.41, samples=2 00:39:37.980 iops : min= 3936, max= 5280, avg=4608.00, stdev=950.35, samples=2 00:39:37.980 lat (usec) : 500=0.06%, 750=0.02%, 1000=0.10% 00:39:37.980 lat (msec) : 2=0.75%, 4=4.99%, 10=18.01%, 20=63.05%, 50=10.52% 00:39:37.980 lat (msec) : 100=1.11%, 250=1.37% 00:39:37.980 cpu : usr=4.96%, sys=7.63%, ctx=436, majf=0, minf=8 00:39:37.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:37.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:37.980 issued rwts: total=3393,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:37.980 job3: (groupid=0, jobs=1): err= 0: pid=387779: Wed Nov 20 00:04:12 2024 00:39:37.980 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:39:37.980 slat (usec): min=2, max=24864, avg=134.40, stdev=1044.81 00:39:37.980 clat 
(usec): min=6999, max=69252, avg=17425.56, stdev=11493.66 00:39:37.980 lat (usec): min=7003, max=69265, avg=17559.96, stdev=11592.30 00:39:37.980 clat percentiles (usec): 00:39:37.980 | 1.00th=[ 7111], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[11338], 00:39:37.980 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13566], 60.00th=[14091], 00:39:37.980 | 70.00th=[14746], 80.00th=[17695], 90.00th=[39060], 95.00th=[44827], 00:39:37.980 | 99.00th=[56886], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:39:37.980 | 99.99th=[69731] 00:39:37.980 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1003msec); 0 zone resets 00:39:37.980 slat (usec): min=3, max=21364, avg=124.54, stdev=1051.51 00:39:37.980 clat (usec): min=2210, max=48633, avg=16444.36, stdev=7789.40 00:39:37.980 lat (usec): min=2216, max=48650, avg=16568.90, stdev=7862.00 00:39:37.980 clat percentiles (usec): 00:39:37.980 | 1.00th=[ 3818], 5.00th=[ 6980], 10.00th=[ 9896], 20.00th=[11863], 00:39:37.980 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13304], 60.00th=[14091], 00:39:37.980 | 70.00th=[16057], 80.00th=[22414], 90.00th=[29754], 95.00th=[32900], 00:39:37.980 | 99.00th=[35390], 99.50th=[45351], 99.90th=[45351], 99.95th=[47973], 00:39:37.980 | 99.99th=[48497] 00:39:37.980 bw ( KiB/s): min= 9952, max=20480, per=22.73%, avg=15216.00, stdev=7444.42, samples=2 00:39:37.980 iops : min= 2488, max= 5120, avg=3804.00, stdev=1861.11, samples=2 00:39:37.980 lat (msec) : 4=0.55%, 10=9.58%, 20=68.60%, 50=20.09%, 100=1.18% 00:39:37.980 cpu : usr=3.19%, sys=4.39%, ctx=237, majf=0, minf=1 00:39:37.980 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:37.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:37.980 issued rwts: total=3584,3931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:37.980 00:39:37.980 Run status group 0 (all jobs): 00:39:37.980 READ: bw=56.6MiB/s (59.4MB/s), 13.1MiB/s-15.9MiB/s (13.8MB/s-16.7MB/s), io=57.3MiB (60.0MB), run=1003-1011msec 00:39:37.980 WRITE: bw=65.4MiB/s (68.6MB/s), 15.3MiB/s-17.8MiB/s (16.1MB/s-18.7MB/s), io=66.1MiB (69.3MB), run=1003-1011msec 00:39:37.980 00:39:37.980 Disk stats (read/write): 00:39:37.980 nvme0n1: ios=3392/3584, merge=0/0, ticks=46527/53730, in_queue=100257, util=98.10% 00:39:37.980 nvme0n2: ios=3408/3584, merge=0/0, ticks=39446/37491, in_queue=76937, util=98.07% 00:39:37.980 nvme0n3: ios=2610/3751, merge=0/0, ticks=31548/60955, in_queue=92503, util=99.48% 00:39:37.980 nvme0n4: ios=3072/3242, merge=0/0, ticks=33358/32549, in_queue=65907, util=89.71% 00:39:37.980 00:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:37.980 00:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=387911 00:39:37.980 00:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:37.980 00:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:37.980 [global] 00:39:37.980 thread=1 00:39:37.980 invalidate=1 00:39:37.980 rw=read 00:39:37.980 time_based=1 00:39:37.980 runtime=10 00:39:37.980 ioengine=libaio 00:39:37.980 direct=1 00:39:37.980 bs=4096 00:39:37.980 iodepth=1 00:39:37.980 norandommap=1 00:39:37.980 numjobs=1 00:39:37.980 
00:39:37.980 [job0] 00:39:37.980 filename=/dev/nvme0n1 00:39:37.980 [job1] 00:39:37.980 filename=/dev/nvme0n2 00:39:37.980 [job2] 00:39:37.980 filename=/dev/nvme0n3 00:39:37.980 [job3] 00:39:37.980 filename=/dev/nvme0n4 00:39:37.980 Could not set queue depth (nvme0n1) 00:39:37.980 Could not set queue depth (nvme0n2) 00:39:37.980 Could not set queue depth (nvme0n3) 00:39:37.980 Could not set queue depth (nvme0n4) 00:39:37.980 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.980 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.980 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.980 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.980 fio-3.35 00:39:37.980 Starting 4 threads 00:39:41.258 00:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:41.258 00:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:41.258 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10002432, buflen=4096 00:39:41.258 fio: pid=388070, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:41.515 00:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:41.515 00:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:41.515 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1196032, buflen=4096 00:39:41.515 fio: pid=388059, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:41.773 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=425984, buflen=4096 00:39:41.773 fio: pid=388015, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:39:41.773 00:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:41.773 00:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:42.032 00:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:42.032 00:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:42.032 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=638976, buflen=4096 00:39:42.032 fio: pid=388024, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:42.032 00:39:42.032 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=388015: Wed Nov 20 00:04:16 2024 00:39:42.032 read: IOPS=30, BW=120KiB/s (123kB/s)(416KiB/3461msec) 00:39:42.032 slat (usec): min=8, max=7824, avg=96.79, 
stdev=761.47 00:39:42.032 clat (usec): min=283, max=52345, avg=33168.00, stdev=16609.46 00:39:42.032 lat (usec): min=299, max=52362, avg=33190.48, stdev=16605.92 00:39:42.032 clat percentiles (usec): 00:39:42.032 | 1.00th=[ 347], 5.00th=[ 355], 10.00th=[ 388], 20.00th=[ 758], 00:39:42.032 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:42.032 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:42.032 | 99.00th=[44827], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:39:42.032 | 99.99th=[52167] 00:39:42.032 bw ( KiB/s): min= 104, max= 184, per=3.85%, avg=122.67, stdev=31.46, samples=6 00:39:42.032 iops : min= 26, max= 46, avg=30.67, stdev= 7.87, samples=6 00:39:42.032 lat (usec) : 500=19.05%, 1000=0.95% 00:39:42.032 lat (msec) : 50=78.10%, 100=0.95% 00:39:42.032 cpu : usr=0.00%, sys=0.32%, ctx=106, majf=0, minf=1 00:39:42.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:42.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.032 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.032 issued rwts: total=105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:42.032 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=388024: Wed Nov 20 00:04:16 2024 00:39:42.032 read: IOPS=41, BW=165KiB/s (169kB/s)(624KiB/3775msec) 00:39:42.032 slat (usec): min=5, max=16888, avg=274.06, stdev=1892.10 00:39:42.032 clat (usec): min=271, max=41131, avg=23772.40, stdev=20137.91 00:39:42.032 lat (usec): min=288, max=58019, avg=24048.00, stdev=20454.31 00:39:42.032 clat percentiles (usec): 00:39:42.032 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 306], 00:39:42.032 | 30.00th=[ 351], 40.00th=[ 449], 50.00th=[41157], 60.00th=[41157], 00:39:42.032 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:42.032 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:42.032 | 99.99th=[41157] 00:39:42.032 bw ( KiB/s): min= 94, max= 472, per=5.39%, avg=171.14, stdev=137.50, samples=7 00:39:42.032 iops : min= 23, max= 118, avg=42.71, stdev=34.42, samples=7 00:39:42.032 lat (usec) : 500=41.40%, 750=0.64% 00:39:42.032 lat (msec) : 50=57.32% 00:39:42.032 cpu : usr=0.05%, sys=0.05%, ctx=160, majf=0, minf=2 00:39:42.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:42.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.032 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.032 issued rwts: total=157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:42.032 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=388059: Wed Nov 20 00:04:16 2024 00:39:42.032 read: IOPS=92, BW=368KiB/s (377kB/s)(1168KiB/3173msec) 00:39:42.032 slat (nsec): min=5798, max=53832, avg=14644.86, stdev=7818.25 00:39:42.032 clat (usec): min=292, max=41378, avg=10769.49, stdev=17794.60 00:39:42.032 lat (usec): min=299, max=41397, avg=10784.14, stdev=17797.25 00:39:42.032 clat percentiles (usec): 00:39:42.032 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 297], 20.00th=[ 306], 00:39:42.032 | 30.00th=[ 310], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:39:42.032 | 70.00th=[ 367], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:39:42.032 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:42.032 | 99.99th=[41157] 00:39:42.032 bw ( KiB/s): min= 96, max= 1816, per=12.10%, avg=384.00, stdev=701.54, samples=6 00:39:42.032 iops : min= 24, max= 454, avg=96.00, stdev=175.39, samples=6 00:39:42.032 lat (usec) : 500=73.72%, 750=0.34% 00:39:42.032 lat (msec) : 50=25.60% 00:39:42.032 cpu : usr=0.00%, sys=0.19%, ctx=293, majf=0, minf=1 00:39:42.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:42.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.032 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.032 issued rwts: total=293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:42.032 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=388070: Wed Nov 20 00:04:16 2024 00:39:42.032 read: IOPS=844, BW=3378KiB/s (3459kB/s)(9768KiB/2892msec) 00:39:42.032 slat (nsec): min=4606, max=47695, avg=9603.07, stdev=6112.94 00:39:42.032 clat (usec): min=200, max=41046, avg=1163.01, stdev=5986.52 00:39:42.032 lat (usec): min=206, max=41064, avg=1172.61, stdev=5988.29 00:39:42.032 clat percentiles (usec): 00:39:42.032 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 245], 00:39:42.032 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:39:42.032 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 347], 00:39:42.032 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:42.032 | 99.99th=[41157] 00:39:42.032 bw ( KiB/s): min= 96, max= 8088, per=53.46%, avg=1696.00, stdev=3573.24, samples=5 00:39:42.032 iops : min= 24, max= 2022, avg=424.00, stdev=893.31, samples=5 00:39:42.032 lat (usec) : 250=41.59%, 500=56.04%, 750=0.12% 00:39:42.032 lat (msec) : 50=2.21% 00:39:42.032 cpu : usr=0.38%, sys=0.83%, ctx=2443, majf=0, minf=2 00:39:42.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:42.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.032 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.032 issued rwts: total=2443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:42.032 00:39:42.032 Run status group 0 (all jobs): 00:39:42.032 READ: bw=3172KiB/s (3249kB/s), 120KiB/s-3378KiB/s (123kB/s-3459kB/s), io=11.7MiB (12.3MB), run=2892-3775msec 00:39:42.032 00:39:42.032 Disk stats (read/write): 00:39:42.032 nvme0n1: ios=119/0, merge=0/0, ticks=3376/0, in_queue=3376, util=96.60% 00:39:42.032 nvme0n2: ios=152/0, merge=0/0, ticks=3545/0, in_queue=3545, util=95.87% 00:39:42.032 nvme0n3: ios=290/0, merge=0/0, ticks=3063/0, in_queue=3063, util=96.72% 00:39:42.032 nvme0n4: ios=2386/0, merge=0/0, ticks=2808/0, in_queue=2808, util=96.78% 00:39:42.291 00:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:42.291 00:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:42.548 00:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:42.548 00:04:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:42.806 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:42.806 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:43.063 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:43.063 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:43.320 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:43.320 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 387911 00:39:43.320 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:43.320 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:43.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:43.580 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:43.580 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:39:43.580 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:43.580 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:43.580 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:43.580 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:43.580 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:39:43.580 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:43.580 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:43.580 nvmf hotplug test: fio failed as expected 00:39:43.580 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:43.838 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:43.838 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:43.838 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:43.838 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:39:43.838 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:43.838 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:43.838 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:43.838 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:43.838 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:43.838 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:43.838 00:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:43.838 rmmod nvme_tcp 00:39:43.838 rmmod nvme_fabrics 00:39:43.838 rmmod nvme_keyring 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 386025 ']' 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 386025 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 386025 ']' 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 386025 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386025 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386025' 00:39:43.838 killing process with pid 386025 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 386025 00:39:43.838 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 386025 00:39:44.096 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:44.096 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:44.096 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:44.096 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:44.096 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:39:44.096 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:44.096 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:39:44.096 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:44.096 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:44.096 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:44.096 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:44.096 00:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:46.092 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:46.092 00:39:46.092 real 0m23.630s 00:39:46.092 user 1m7.929s 00:39:46.092 sys 0m8.872s 00:39:46.092 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:46.092 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:46.092 ************************************ 00:39:46.092 END TEST nvmf_fio_target 00:39:46.092 ************************************ 00:39:46.092 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:46.092 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:46.092 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:46.092 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:46.092 ************************************ 00:39:46.092 START TEST nvmf_bdevio 00:39:46.092 ************************************ 00:39:46.092 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:46.351 * Looking for test storage... 
00:39:46.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:46.351 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:46.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:46.352 --rc genhtml_branch_coverage=1 00:39:46.352 --rc genhtml_function_coverage=1 00:39:46.352 --rc genhtml_legend=1 00:39:46.352 --rc geninfo_all_blocks=1 00:39:46.352 --rc geninfo_unexecuted_blocks=1 00:39:46.352 00:39:46.352 ' 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:46.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:46.352 --rc genhtml_branch_coverage=1 00:39:46.352 --rc genhtml_function_coverage=1 00:39:46.352 --rc genhtml_legend=1 00:39:46.352 --rc geninfo_all_blocks=1 00:39:46.352 --rc geninfo_unexecuted_blocks=1 00:39:46.352 00:39:46.352 ' 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:46.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:46.352 --rc genhtml_branch_coverage=1 00:39:46.352 --rc genhtml_function_coverage=1 00:39:46.352 --rc genhtml_legend=1 00:39:46.352 --rc geninfo_all_blocks=1 00:39:46.352 --rc geninfo_unexecuted_blocks=1 00:39:46.352 00:39:46.352 ' 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:46.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:46.352 --rc genhtml_branch_coverage=1 00:39:46.352 --rc genhtml_function_coverage=1 00:39:46.352 --rc genhtml_legend=1 00:39:46.352 --rc geninfo_all_blocks=1 00:39:46.352 --rc geninfo_unexecuted_blocks=1 00:39:46.352 00:39:46.352 ' 00:39:46.352 00:04:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:46.352 00:04:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:46.352 00:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:48.886 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:48.886 00:04:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:48.886 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:48.886 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:48.886 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:48.886 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:48.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:48.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:39:48.887 00:39:48.887 --- 10.0.0.2 ping statistics --- 00:39:48.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:48.887 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:48.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:48.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:39:48.887 00:39:48.887 --- 10.0.0.1 ping statistics --- 00:39:48.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:48.887 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:48.887 00:04:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=390746 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 390746 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 390746 ']' 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:48.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:48.887 00:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:48.887 [2024-11-20 00:04:22.826132] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:48.887 [2024-11-20 00:04:22.827215] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:39:48.887 [2024-11-20 00:04:22.827271] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:48.887 [2024-11-20 00:04:22.898612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:48.887 [2024-11-20 00:04:22.946169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:48.887 [2024-11-20 00:04:22.946230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:48.887 [2024-11-20 00:04:22.946244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:48.887 [2024-11-20 00:04:22.946256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:48.887 [2024-11-20 00:04:22.946266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:48.887 [2024-11-20 00:04:22.947942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:48.887 [2024-11-20 00:04:22.947970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:48.887 [2024-11-20 00:04:22.948028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:48.887 [2024-11-20 00:04:22.948030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:48.887 [2024-11-20 00:04:23.040787] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
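The trace above shows nvmftestinit splitting the two detected e810 ports between the root namespace and a private one, opening the NVMe/TCP port, verifying connectivity, and then starting nvmf_tgt inside that namespace in interrupt mode. A consolidated sketch of that sequence follows; the interface names (cvl_0_0, cvl_0_1), addresses, core mask and the repo-relative nvmf_tgt path are taken from, or simplified against, this particular run and will differ on other machines.

# Target port goes into a private namespace; the other port stays in the
# root namespace and acts as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic in; the harness tags the rule with an SPDK_NVMF
# comment so it can be stripped again at teardown (comment text simplified here).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: bdevio test rule'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target inside the namespace in interrupt mode on a 4-core mask,
# as nvmfappstart does above (run from the SPDK repository root).
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &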
00:39:48.887 [2024-11-20 00:04:23.040985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:48.887 [2024-11-20 00:04:23.041311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:48.887 [2024-11-20 00:04:23.041919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:48.887 [2024-11-20 00:04:23.042197] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:48.887 [2024-11-20 00:04:23.092893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:48.887 Malloc0 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.887 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.888 00:04:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:48.888 [2024-11-20 00:04:23.161085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:48.888 { 00:39:48.888 "params": { 00:39:48.888 "name": "Nvme$subsystem", 00:39:48.888 "trtype": "$TEST_TRANSPORT", 00:39:48.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.888 "adrfam": "ipv4", 00:39:48.888 "trsvcid": "$NVMF_PORT", 00:39:48.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.888 "hdgst": ${hdgst:-false}, 00:39:48.888 "ddgst": ${ddgst:-false} 00:39:48.888 }, 00:39:48.888 "method": "bdev_nvme_attach_controller" 00:39:48.888 } 00:39:48.888 EOF 00:39:48.888 )") 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:39:48.888 00:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:48.888 "params": { 00:39:48.888 "name": "Nvme1", 00:39:48.888 "trtype": "tcp", 00:39:48.888 "traddr": "10.0.0.2", 00:39:48.888 "adrfam": "ipv4", 00:39:48.888 "trsvcid": "4420", 00:39:48.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:48.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:48.888 "hdgst": false, 00:39:48.888 "ddgst": false 00:39:48.888 }, 00:39:48.888 "method": "bdev_nvme_attach_controller" 00:39:48.888 }' 00:39:49.147 [2024-11-20 00:04:23.211235] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
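The RPC sequence traced above stands the bdevio target up: a TCP transport, a 64 MiB malloc bdev, subsystem cnode1 exposing that bdev as a namespace, and a listener on 10.0.0.2:4420; bdevio is then launched with a generated JSON config that attaches the subsystem as bdev Nvme1. A rough stand-alone equivalent is sketched below; the RPC arguments and the attach parameters are copied from the trace, while the surrounding subsystems/config JSON wrapper and the /tmp/bdevio.json file name are assumptions (the harness pipes the config through /dev/fd/62 instead of a file).

# Target-side RPCs against the interrupt-mode nvmf_tgt started earlier.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator-side JSON config handed to bdevio: attach the subsystem over
# NVMe/TCP as bdev "Nvme1" and run the CUnit suite against it.
cat > /tmp/bdevio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./test/bdev/bdevio/bdevio --json /tmp/bdevio.json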
00:39:49.147 [2024-11-20 00:04:23.211304] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390777 ] 00:39:49.147 [2024-11-20 00:04:23.279839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:49.147 [2024-11-20 00:04:23.332098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:49.147 [2024-11-20 00:04:23.332126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:49.147 [2024-11-20 00:04:23.332129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.405 I/O targets: 00:39:49.405 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:49.405 00:39:49.405 00:39:49.405 CUnit - A unit testing framework for C - Version 2.1-3 00:39:49.405 http://cunit.sourceforge.net/ 00:39:49.405 00:39:49.405 00:39:49.405 Suite: bdevio tests on: Nvme1n1 00:39:49.405 Test: blockdev write read block ...passed 00:39:49.405 Test: blockdev write zeroes read block ...passed 00:39:49.405 Test: blockdev write zeroes read no split ...passed 00:39:49.662 Test: blockdev write zeroes read split ...passed 00:39:49.662 Test: blockdev write zeroes read split partial ...passed 00:39:49.662 Test: blockdev reset ...[2024-11-20 00:04:23.735990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:39:49.662 [2024-11-20 00:04:23.736101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af0b70 (9): Bad file descriptor 00:39:49.663 [2024-11-20 00:04:23.740897] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:39:49.663 passed 00:39:49.663 Test: blockdev write read 8 blocks ...passed 00:39:49.663 Test: blockdev write read size > 128k ...passed 00:39:49.663 Test: blockdev write read invalid size ...passed 00:39:49.663 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:49.663 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:49.663 Test: blockdev write read max offset ...passed 00:39:49.663 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:49.663 Test: blockdev writev readv 8 blocks ...passed 00:39:49.921 Test: blockdev writev readv 30 x 1block ...passed 00:39:49.921 Test: blockdev writev readv block ...passed 00:39:49.921 Test: blockdev writev readv size > 128k ...passed 00:39:49.921 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:49.921 Test: blockdev comparev and writev ...[2024-11-20 00:04:24.038891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:49.921 [2024-11-20 00:04:24.038928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:49.921 [2024-11-20 00:04:24.038952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:49.921 [2024-11-20 00:04:24.038969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:49.921 [2024-11-20 00:04:24.039399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:49.921 [2024-11-20 00:04:24.039427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:49.921 [2024-11-20 00:04:24.039451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:49.921 [2024-11-20 00:04:24.039468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:49.921 [2024-11-20 00:04:24.039885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:49.921 [2024-11-20 00:04:24.039911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:49.921 [2024-11-20 00:04:24.039941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:49.921 [2024-11-20 00:04:24.039959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:49.921 [2024-11-20 00:04:24.040373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:49.921 [2024-11-20 00:04:24.040398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:49.921 [2024-11-20 00:04:24.040419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:49.921 [2024-11-20 00:04:24.040435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:49.921 passed 00:39:49.921 Test: blockdev nvme passthru rw ...passed 00:39:49.921 Test: blockdev nvme passthru vendor specific ...[2024-11-20 00:04:24.123355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:49.921 [2024-11-20 00:04:24.123382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:49.921 [2024-11-20 00:04:24.123530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:49.921 [2024-11-20 00:04:24.123555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:49.921 [2024-11-20 00:04:24.123697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:49.921 [2024-11-20 00:04:24.123722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:49.921 [2024-11-20 00:04:24.123866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:49.921 [2024-11-20 00:04:24.123890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:49.921 passed 00:39:49.921 Test: blockdev nvme admin passthru ...passed 00:39:49.921 Test: blockdev copy ...passed 00:39:49.921 00:39:49.921 Run Summary: Type Total Ran Passed Failed Inactive 00:39:49.921 suites 1 1 n/a 0 0 00:39:49.921 tests 23 23 23 0 0 00:39:49.921 asserts 152 152 152 0 n/a 00:39:49.921 00:39:49.921 Elapsed time = 1.177 seconds 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:50.179 rmmod nvme_tcp 00:39:50.179 rmmod nvme_fabrics 00:39:50.179 rmmod nvme_keyring 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
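With all 23 bdevio tests passing, the teardown that follows in the trace collapses to roughly the commands below; the pid (390746), interface and namespace names are the ones recorded earlier in this run, and the final ip netns delete is an assumption about what _remove_spdk_ns does internally.

# Remove the subsystem and stop the interrupt-mode target.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 390746 && wait 390746    # nvmfpid recorded when nvmf_tgt was started

# Unload the kernel initiator modules; keep going if one is still busy.
modprobe -v -r nvme-tcp || true
modprobe -v -r nvme-fabrics || true

# Drop only the SPDK_NVMF-tagged firewall rules added during setup.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Return the initiator port to an unconfigured state and drop the namespace.
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns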
00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 390746 ']' 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 390746 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 390746 ']' 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 390746 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390746 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390746' 00:39:50.179 killing process with pid 390746 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 390746 00:39:50.179 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 390746 00:39:50.437 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:50.437 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:50.437 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:50.437 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:50.437 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:39:50.437 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:50.437 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:39:50.437 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:50.437 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:50.437 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:50.437 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:50.437 00:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:52.970 00:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:52.970 00:39:52.970 real 0m6.312s 00:39:52.970 user 0m8.347s 
00:39:52.970 sys 0m2.498s 00:39:52.970 00:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:52.970 00:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:52.970 ************************************ 00:39:52.970 END TEST nvmf_bdevio 00:39:52.970 ************************************ 00:39:52.970 00:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:52.970 00:39:52.970 real 3m53.229s 00:39:52.970 user 8m50.692s 00:39:52.970 sys 1m23.067s 00:39:52.970 00:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:52.970 00:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:52.970 ************************************ 00:39:52.970 END TEST nvmf_target_core_interrupt_mode 00:39:52.970 ************************************ 00:39:52.970 00:04:26 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:52.970 00:04:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:52.970 00:04:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:52.970 00:04:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:52.970 ************************************ 00:39:52.970 START TEST nvmf_interrupt 00:39:52.970 ************************************ 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:52.971 * Looking for test storage... 
00:39:52.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:52.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.971 --rc genhtml_branch_coverage=1 00:39:52.971 --rc genhtml_function_coverage=1 00:39:52.971 --rc genhtml_legend=1 00:39:52.971 --rc geninfo_all_blocks=1 00:39:52.971 --rc geninfo_unexecuted_blocks=1 00:39:52.971 00:39:52.971 ' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:52.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.971 --rc genhtml_branch_coverage=1 00:39:52.971 --rc genhtml_function_coverage=1 00:39:52.971 --rc genhtml_legend=1 00:39:52.971 --rc geninfo_all_blocks=1 00:39:52.971 --rc geninfo_unexecuted_blocks=1 00:39:52.971 00:39:52.971 ' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:52.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.971 --rc genhtml_branch_coverage=1 00:39:52.971 --rc genhtml_function_coverage=1 00:39:52.971 --rc genhtml_legend=1 00:39:52.971 --rc geninfo_all_blocks=1 00:39:52.971 --rc geninfo_unexecuted_blocks=1 00:39:52.971 00:39:52.971 ' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:52.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.971 --rc genhtml_branch_coverage=1 00:39:52.971 --rc genhtml_function_coverage=1 00:39:52.971 --rc genhtml_legend=1 00:39:52.971 --rc geninfo_all_blocks=1 00:39:52.971 --rc geninfo_unexecuted_blocks=1 00:39:52.971 00:39:52.971 ' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:52.971 00:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- 
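The common.sh pass traced above only sets variables; in this run they turn into the interrupt-mode target launch and the initiator identity used much later. Pulled together as a sketch, with the concrete paths and values taken from this log rather than anything general:

# Target side, as launched later in this run (namespace, binary path and core mask from this log):
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --interrupt-mode -m 0x3
# Initiator identity generated here and reused by 'nvme connect' at the end of the test:
NVME_HOSTNQN=$(nvme gen-hostnqn)                      # nqn.2014-08.org.nvmexpress:uuid:5b23e107-... in this run
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55      # the uuid part of that NQN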
nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:54.880 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:54.880 00:04:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:54.880 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:54.880 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:54.880 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:54.881 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:54.881 00:04:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:54.881 00:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:54.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:54.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:39:54.881 00:39:54.881 --- 10.0.0.2 ping statistics --- 00:39:54.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:54.881 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:54.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:54.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:39:54.881 00:39:54.881 --- 10.0.0.1 ping statistics --- 00:39:54.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:54.881 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=392864 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 392864 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 392864 ']' 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:54.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:54.881 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:54.881 [2024-11-20 00:04:29.121784] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:54.881 [2024-11-20 00:04:29.122863] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:39:54.881 [2024-11-20 00:04:29.122916] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:55.141 [2024-11-20 00:04:29.198182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:55.141 [2024-11-20 00:04:29.241586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:55.141 [2024-11-20 00:04:29.241657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:55.141 [2024-11-20 00:04:29.241695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:55.141 [2024-11-20 00:04:29.241706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:55.141 [2024-11-20 00:04:29.241717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:55.141 [2024-11-20 00:04:29.242978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:55.141 [2024-11-20 00:04:29.242983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.141 [2024-11-20 00:04:29.323683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:55.141 [2024-11-20 00:04:29.323772] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:55.141 [2024-11-20 00:04:29.323943] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:55.141 5000+0 records in 00:39:55.141 5000+0 records out 00:39:55.141 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0145745 s, 703 MB/s 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:55.141 AIO0 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:55.141 [2024-11-20 00:04:29.431585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.141 00:04:29 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:55.400 [2024-11-20 00:04:29.455771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 392864 0 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 392864 0 idle 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392864 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392864 -w 256 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392864 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.25 reactor_0' 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392864 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.25 reactor_0 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 392864 1 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 392864 1 idle 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392864 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392864 -w 256 00:39:55.400 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392869 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1' 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392869 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=393022 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
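The repeating top/grep/sed/awk trace is the harness sampling a reactor thread's CPU share to decide idle versus busy: at most 30% counts as idle, and the loop starting here re-runs the same check against a busy threshold that interrupt.sh lowers to 30. A minimal bash reconstruction of that check, assembled from the traced interrupt/common.sh commands (a sketch, not the canonical helper):

# Sample one reactor thread's %CPU, exactly as the traced pipeline does
reactor_cpu_rate() {
  local pid=$1 idx=$2
  top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" \
    | sed -e 's/^\s*//g' | awk '{print $9}'            # %CPU column of that reactor thread
}
rate=$(reactor_cpu_rate 392864 1)                       # 0.0 while idle, 80-87 under spdk_nvme_perf in this log
rate=${rate%.*}                                         # drop the decimal, as the trace does (86.7 -> 86)
(( rate > 30 )) && echo busy || echo idle               # 30 is the idle ceiling and this test's busy floor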
00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 392864 0 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 392864 0 busy 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392864 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392864 -w 256 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392864 root 20 0 128.2g 47616 34176 R 80.0 0.1 0:00.37 reactor_0' 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392864 root 20 0 128.2g 47616 34176 R 80.0 0.1 0:00.37 reactor_0 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=80.0 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=80 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:55.659 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 392864 1 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 392864 1 busy 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392864 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392864 -w 256 00:39:55.918 00:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:55.918 00:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392869 root 20 0 128.2g 47616 34176 R 86.7 0.1 0:00.20 reactor_1' 00:39:55.918 00:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392869 root 20 0 128.2g 47616 34176 R 86.7 0.1 0:00.20 reactor_1 00:39:55.918 00:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:55.918 00:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:55.918 00:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=86.7 00:39:55.918 00:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=86 00:39:55.918 00:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:55.918 00:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:55.918 00:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:55.918 00:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:55.918 00:04:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 393022 00:40:05.888 Initializing NVMe Controllers 00:40:05.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:05.888 Controller IO queue size 256, less than required. 00:40:05.888 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:05.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:05.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:05.888 Initialization complete. Launching workers. 
00:40:05.888 ======================================================== 00:40:05.888 Latency(us) 00:40:05.888 Device Information : IOPS MiB/s Average min max 00:40:05.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 11863.48 46.34 21600.76 4188.07 62406.20 00:40:05.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13734.28 53.65 18654.07 3959.40 22722.56 00:40:05.888 ======================================================== 00:40:05.888 Total : 25597.76 99.99 20019.74 3959.40 62406.20 00:40:05.888 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 392864 0 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 392864 0 idle 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392864 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392864 -w 256 00:40:05.888 00:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392864 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:19.19 reactor_0' 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392864 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:19.19 reactor_0 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 392864 1 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 392864 1 idle 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392864 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:05.888 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:05.889 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:05.889 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:05.889 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:05.889 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:05.889 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392864 -w 256 00:40:05.889 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:06.147 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392869 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:08.97 reactor_1' 00:40:06.147 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392869 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:08.97 reactor_1 00:40:06.147 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:06.147 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:06.147 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:06.147 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:06.147 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:06.147 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:06.147 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:06.147 00:04:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:06.147 00:04:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:06.404 00:04:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:06.404 00:04:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:40:06.404 00:04:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:06.404 00:04:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:06.404 00:04:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
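The connect step traced above is plain kernel-initiator plumbing: nvme-cli attaches to the subsystem the target exported, and the waitforserial polling that continues below re-checks lsblk until a block device with the test serial appears. Condensed from this run's trace (address, NQNs and serial are the ones used here; the real helper caps the retries):

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode1 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
  --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
# waitforserial: poll until the namespace shows up with the expected serial
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
  sleep 2
done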
for i in {0..1} 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 392864 0 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 392864 0 idle 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392864 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392864 -w 256 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392864 root 20 0 128.2g 59904 34176 R 0.0 0.1 0:19.28 reactor_0' 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392864 root 20 0 128.2g 59904 34176 R 0.0 0.1 0:19.28 reactor_0 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 392864 1 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 392864 1 idle 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=392864 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:08.934 00:04:42 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 392864 -w 256 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 392869 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:08.99 reactor_1' 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 392869 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:08.99 reactor_1 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:08.934 00:04:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:08.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:08.934 rmmod nvme_tcp 00:40:08.934 rmmod nvme_fabrics 00:40:08.934 rmmod nvme_keyring 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 392864 ']' 00:40:08.934 
00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 392864 00:40:08.934 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 392864 ']' 00:40:08.935 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 392864 00:40:08.935 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:40:08.935 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:08.935 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 392864 00:40:08.935 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:08.935 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:08.935 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 392864' 00:40:08.935 killing process with pid 392864 00:40:08.935 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 392864 00:40:08.935 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 392864 00:40:09.192 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:09.192 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:09.192 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:09.192 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:09.192 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:40:09.192 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:09.192 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:40:09.192 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:09.192 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:09.192 00:04:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:09.192 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:09.192 00:04:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:11.735 00:04:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:11.735 00:40:11.735 real 0m18.673s 00:40:11.735 user 0m35.504s 00:40:11.735 sys 0m7.191s 00:40:11.735 00:04:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:11.735 00:04:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:11.735 ************************************ 00:40:11.735 END TEST nvmf_interrupt 00:40:11.735 ************************************ 00:40:11.735 00:40:11.735 real 33m4.361s 00:40:11.735 user 87m23.559s 00:40:11.735 sys 8m2.869s 00:40:11.735 00:04:45 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:11.735 00:04:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:11.735 ************************************ 00:40:11.735 END TEST nvmf_tcp 00:40:11.735 ************************************ 00:40:11.735 00:04:45 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:40:11.735 00:04:45 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:11.735 00:04:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
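The trace above ends the nvmf_interrupt run by killing the target (pid 392864) and unwinding the host-side NVMe/TCP state. Below is a minimal sketch of that teardown pattern, assuming the target was started as a child of the current shell; the pid check, interface name (cvl_0_1) and namespace name (cvl_0_0_ns_spdk) are taken from this log, and the helper itself is illustrative rather than the real autotest_common.sh / nvmf/common.sh implementation:

    # Illustrative teardown sketch -- mirrors the commands traced above, not the SPDK helpers.
    teardown_nvmf_target() {
        local pid=$1
        # Only kill the pid if it still looks like an SPDK reactor process.
        [ "$(ps --no-headers -o comm= "$pid")" = "reactor_0" ] || return 0
        kill "$pid" && wait "$pid" 2>/dev/null
        # Unload the host-side NVMe/TCP modules (tcp before fabrics).
        modprobe -v -r nvme-tcp
        modprobe -v -r nvme-fabrics
        # Drop only the firewall rules the test suite tagged with the SPDK_NVMF comment.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
        # Remove the target-side network namespace and flush the initiator interface.
        ip netns del cvl_0_0_ns_spdk 2>/dev/null
        ip -4 addr flush dev cvl_0_1 2>/dev/null
    }
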
00:40:11.735 00:04:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:11.735 00:04:45 -- common/autotest_common.sh@10 -- # set +x 00:40:11.735 ************************************ 00:40:11.735 START TEST spdkcli_nvmf_tcp 00:40:11.735 ************************************ 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:11.735 * Looking for test storage... 00:40:11.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:11.735 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:11.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.736 --rc genhtml_branch_coverage=1 00:40:11.736 --rc genhtml_function_coverage=1 00:40:11.736 --rc genhtml_legend=1 00:40:11.736 --rc geninfo_all_blocks=1 00:40:11.736 --rc geninfo_unexecuted_blocks=1 00:40:11.736 00:40:11.736 ' 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:11.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.736 --rc genhtml_branch_coverage=1 00:40:11.736 --rc genhtml_function_coverage=1 00:40:11.736 --rc genhtml_legend=1 00:40:11.736 --rc geninfo_all_blocks=1 00:40:11.736 --rc geninfo_unexecuted_blocks=1 00:40:11.736 00:40:11.736 ' 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:11.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.736 --rc genhtml_branch_coverage=1 00:40:11.736 --rc genhtml_function_coverage=1 00:40:11.736 --rc genhtml_legend=1 00:40:11.736 --rc geninfo_all_blocks=1 00:40:11.736 --rc geninfo_unexecuted_blocks=1 00:40:11.736 00:40:11.736 ' 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:11.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.736 --rc genhtml_branch_coverage=1 00:40:11.736 --rc genhtml_function_coverage=1 00:40:11.736 --rc genhtml_legend=1 00:40:11.736 --rc geninfo_all_blocks=1 00:40:11.736 --rc geninfo_unexecuted_blocks=1 00:40:11.736 00:40:11.736 ' 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:11.736 
00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:11.736 00:04:45 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:11.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=395022 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 395022 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 395022 ']' 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:11.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:11.736 [2024-11-20 00:04:45.697524] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
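Just above, run_nvmf_tgt launches build/bin/nvmf_tgt with a two-core mask (-m 0x3) and waitforlisten blocks until pid 395022 is serving RPCs on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern follows, assuming the SPDK tree path shown in this log; rpc_get_methods is used here only as a liveness probe, and the real waitforlisten helper in test/common/autotest_common.sh may differ in detail:

    # Illustrative launch-and-wait sketch -- not the run_nvmf_tgt/waitforlisten implementation.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from this log
    "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x3 -p 0 &
    tgt_pid=$!
    # Poll until the RPC socket exists and answers, giving up after roughly 10 seconds.
    for _ in $(seq 1 100); do
        if [ -S /var/tmp/spdk.sock ] && \
           "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done
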
00:40:11.736 [2024-11-20 00:04:45.697617] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395022 ] 00:40:11.736 [2024-11-20 00:04:45.766227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:11.736 [2024-11-20 00:04:45.815936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:11.736 [2024-11-20 00:04:45.815940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:40:11.736 00:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:11.737 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:11.737 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:11.737 00:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:11.737 00:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:11.737 00:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:11.737 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:11.737 00:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:11.737 00:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:11.737 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:11.737 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:11.737 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:11.737 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:11.737 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:11.737 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:11.737 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:11.737 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:11.737 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:11.737 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:11.737 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:11.737 ' 00:40:15.029 [2024-11-20 00:04:48.651914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:15.960 [2024-11-20 00:04:49.920443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:18.486 [2024-11-20 00:04:52.291969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:20.386 [2024-11-20 00:04:54.334481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:21.758 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:21.758 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:21.758 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:21.758 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:21.758 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:21.758 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:21.758 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:21.758 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:21.758 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:21.758 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:21.758 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:21.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:21.758 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:21.758 00:04:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:21.758 00:04:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:21.758 00:04:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:21.758 00:04:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:21.758 00:04:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:21.758 00:04:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:21.758 00:04:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:21.758 00:04:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:22.323 00:04:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:22.323 00:04:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:22.323 00:04:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:22.323 00:04:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:22.323 00:04:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:22.323 
00:04:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:22.323 00:04:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:22.323 00:04:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:22.323 00:04:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:22.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:22.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:22.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:22.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:22.323 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:22.323 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:22.323 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:22.323 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:22.323 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:22.323 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:22.323 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:22.323 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:22.323 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:22.323 ' 00:40:27.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:27.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:27.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:27.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:27.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:27.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:27.580 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:27.580 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:27.580 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:27.580 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:27.580 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:27.580 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:27.580 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:27.580 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:27.580 00:05:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:27.580 00:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:27.580 00:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.839 
00:05:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 395022 00:40:27.839 00:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 395022 ']' 00:40:27.839 00:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 395022 00:40:27.839 00:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:40:27.839 00:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:27.839 00:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395022 00:40:27.839 00:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:27.839 00:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:27.839 00:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395022' 00:40:27.839 killing process with pid 395022 00:40:27.839 00:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 395022 00:40:27.839 00:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 395022 00:40:27.839 00:05:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:27.839 00:05:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:27.839 00:05:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 395022 ']' 00:40:27.839 00:05:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 395022 00:40:27.839 00:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 395022 ']' 00:40:27.839 00:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 395022 00:40:27.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (395022) - No such process 00:40:27.839 00:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 395022 is not found' 00:40:27.839 Process with pid 395022 is not found 00:40:27.839 00:05:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:27.839 00:05:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:27.839 00:05:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:27.839 00:40:27.839 real 0m16.630s 00:40:27.839 user 0m35.569s 00:40:27.839 sys 0m0.778s 00:40:27.839 00:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.839 00:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:27.839 ************************************ 00:40:27.839 END TEST spdkcli_nvmf_tcp 00:40:27.839 ************************************ 00:40:27.839 00:05:02 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:27.839 00:05:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:27.839 00:05:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:27.839 00:05:02 -- common/autotest_common.sh@10 -- # set +x 00:40:28.097 ************************************ 00:40:28.097 START TEST nvmf_identify_passthru 00:40:28.097 ************************************ 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:28.097 * Looking for test storage... 
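Each suite in this log is driven through the same wrapper: a START TEST banner, the test script timed with bash's time builtin (the real/user/sys summary above), and an END TEST banner. A minimal sketch of that pattern, with a simplified signature; the actual helper is run_test() in test/common/autotest_common.sh and also manages xtrace, which is omitted here:

    # Illustrative run_test-style wrapper -- banners plus a bash `time` summary.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                       # prints the real/user/sys lines seen in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # Example, mirroring the invocation traced above (path as in this log):
    # run_test_sketch nvmf_identify_passthru \
    #     /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
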
00:40:28.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:28.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.097 --rc genhtml_branch_coverage=1 00:40:28.097 --rc genhtml_function_coverage=1 00:40:28.097 --rc genhtml_legend=1 00:40:28.097 --rc geninfo_all_blocks=1 00:40:28.097 --rc geninfo_unexecuted_blocks=1 00:40:28.097 00:40:28.097 ' 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:28.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.097 --rc genhtml_branch_coverage=1 00:40:28.097 --rc genhtml_function_coverage=1 00:40:28.097 --rc genhtml_legend=1 00:40:28.097 --rc geninfo_all_blocks=1 00:40:28.097 --rc geninfo_unexecuted_blocks=1 00:40:28.097 00:40:28.097 ' 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:28.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.097 --rc genhtml_branch_coverage=1 00:40:28.097 --rc genhtml_function_coverage=1 00:40:28.097 --rc genhtml_legend=1 00:40:28.097 --rc geninfo_all_blocks=1 00:40:28.097 --rc geninfo_unexecuted_blocks=1 00:40:28.097 00:40:28.097 ' 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:28.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:28.097 --rc genhtml_branch_coverage=1 00:40:28.097 --rc genhtml_function_coverage=1 00:40:28.097 --rc genhtml_legend=1 00:40:28.097 --rc geninfo_all_blocks=1 00:40:28.097 --rc geninfo_unexecuted_blocks=1 00:40:28.097 00:40:28.097 ' 00:40:28.097 00:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:28.097 00:05:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.097 00:05:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.097 00:05:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.097 00:05:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:28.097 00:05:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:28.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:28.097 00:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:28.097 00:05:02 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:28.097 00:05:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.097 00:05:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.097 00:05:02 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.097 00:05:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:28.097 00:05:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:28.097 00:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:28.097 00:05:02 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:28.097 00:05:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:29.998 00:05:04 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:29.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:29.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:29.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:29.998 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:29.998 00:05:04 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:29.998 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:30.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:30.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:40:30.257 00:40:30.257 --- 10.0.0.2 ping statistics --- 00:40:30.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:30.257 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:30.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:30.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:40:30.257 00:40:30.257 --- 10.0.0.1 ping statistics --- 00:40:30.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:30.257 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:30.257 00:05:04 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:30.257 00:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:30.257 00:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:40:30.257 00:05:04 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:40:30.257 00:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:30.257 00:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:30.257 00:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:30.257 00:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:30.257 00:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:34.546 00:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:40:34.546 00:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:34.546 00:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:34.546 00:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:38.727 00:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:38.727 00:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:38.727 00:05:12 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:38.727 00:05:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:38.727 00:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:38.727 00:05:12 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:38.727 00:05:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:38.727 00:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=399557 00:40:38.727 00:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:38.727 00:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:38.727 00:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 399557 00:40:38.727 00:05:12 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 399557 ']' 00:40:38.727 00:05:12 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:38.727 00:05:12 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:38.727 00:05:12 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:38.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:38.727 00:05:12 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:38.727 00:05:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:38.727 [2024-11-20 00:05:13.018654] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:40:38.727 [2024-11-20 00:05:13.018743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:38.985 [2024-11-20 00:05:13.097690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:38.985 [2024-11-20 00:05:13.146763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:38.985 [2024-11-20 00:05:13.146820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:38.985 [2024-11-20 00:05:13.146832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:38.985 [2024-11-20 00:05:13.146843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:38.985 [2024-11-20 00:05:13.146853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:38.985 [2024-11-20 00:05:13.148327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:38.985 [2024-11-20 00:05:13.148410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:38.985 [2024-11-20 00:05:13.148351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:38.985 [2024-11-20 00:05:13.148413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:38.985 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:38.985 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:40:38.985 00:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:38.985 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.985 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:38.985 INFO: Log level set to 20 00:40:38.985 INFO: Requests: 00:40:38.985 { 00:40:38.985 "jsonrpc": "2.0", 00:40:38.985 "method": "nvmf_set_config", 00:40:38.985 "id": 1, 00:40:38.985 "params": { 00:40:38.985 "admin_cmd_passthru": { 00:40:38.985 "identify_ctrlr": true 00:40:38.985 } 00:40:38.985 } 00:40:38.985 } 00:40:38.985 00:40:38.985 INFO: response: 00:40:38.985 { 00:40:38.985 "jsonrpc": "2.0", 00:40:38.985 "id": 1, 00:40:38.985 "result": true 00:40:38.985 } 00:40:38.985 00:40:38.985 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:38.985 00:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:38.985 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.985 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:38.985 INFO: Setting log level to 20 00:40:38.985 INFO: Setting log level to 20 00:40:38.985 INFO: Log level set to 20 00:40:38.985 INFO: Log level set to 20 00:40:38.985 INFO: Requests: 00:40:38.985 { 00:40:38.985 "jsonrpc": "2.0", 00:40:38.985 "method": "framework_start_init", 00:40:38.985 "id": 1 00:40:38.985 } 00:40:38.985 00:40:38.985 INFO: Requests: 00:40:38.985 { 00:40:38.985 "jsonrpc": "2.0", 00:40:38.985 "method": "framework_start_init", 00:40:38.985 "id": 1 00:40:38.985 } 00:40:38.985 00:40:39.243 [2024-11-20 00:05:13.372252] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:39.243 INFO: response: 00:40:39.243 { 00:40:39.243 "jsonrpc": "2.0", 00:40:39.243 "id": 1, 00:40:39.243 "result": true 00:40:39.243 } 00:40:39.243 00:40:39.243 INFO: response: 00:40:39.243 { 00:40:39.243 "jsonrpc": "2.0", 00:40:39.243 "id": 1, 00:40:39.243 "result": true 00:40:39.243 } 00:40:39.243 00:40:39.243 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.243 00:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:39.243 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.243 00:05:13 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:39.243 INFO: Setting log level to 40 00:40:39.243 INFO: Setting log level to 40 00:40:39.243 INFO: Setting log level to 40 00:40:39.243 [2024-11-20 00:05:13.382444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:39.243 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.243 00:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:39.243 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:39.243 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:39.243 00:05:13 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:39.243 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.243 00:05:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:42.539 Nvme0n1 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:42.539 [2024-11-20 00:05:16.281846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:42.539 [ 00:40:42.539 { 00:40:42.539 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:42.539 "subtype": "Discovery", 00:40:42.539 "listen_addresses": [], 00:40:42.539 "allow_any_host": true, 00:40:42.539 "hosts": [] 00:40:42.539 }, 00:40:42.539 { 00:40:42.539 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:42.539 "subtype": "NVMe", 00:40:42.539 "listen_addresses": [ 00:40:42.539 { 00:40:42.539 "trtype": "TCP", 00:40:42.539 "adrfam": "IPv4", 00:40:42.539 "traddr": "10.0.0.2", 00:40:42.539 "trsvcid": "4420" 00:40:42.539 } 00:40:42.539 ], 00:40:42.539 "allow_any_host": true, 00:40:42.539 "hosts": [], 00:40:42.539 "serial_number": 
"SPDK00000000000001", 00:40:42.539 "model_number": "SPDK bdev Controller", 00:40:42.539 "max_namespaces": 1, 00:40:42.539 "min_cntlid": 1, 00:40:42.539 "max_cntlid": 65519, 00:40:42.539 "namespaces": [ 00:40:42.539 { 00:40:42.539 "nsid": 1, 00:40:42.539 "bdev_name": "Nvme0n1", 00:40:42.539 "name": "Nvme0n1", 00:40:42.539 "nguid": "3920CD236DAF4FA88824697B9969DB8A", 00:40:42.539 "uuid": "3920cd23-6daf-4fa8-8824-697b9969db8a" 00:40:42.539 } 00:40:42.539 ] 00:40:42.539 } 00:40:42.539 ] 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:42.539 00:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:42.539 00:05:16 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:42.539 00:05:16 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:42.539 00:05:16 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:42.539 00:05:16 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:42.539 00:05:16 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:42.539 00:05:16 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:42.539 rmmod nvme_tcp 00:40:42.539 rmmod nvme_fabrics 00:40:42.539 rmmod nvme_keyring 00:40:42.539 00:05:16 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:42.539 00:05:16 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:42.539 00:05:16 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:42.539 00:05:16 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 399557 ']' 00:40:42.539 00:05:16 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 399557 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 399557 ']' 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 399557 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 399557 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 399557' 00:40:42.539 killing process with pid 399557 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 399557 00:40:42.539 00:05:16 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 399557 00:40:43.912 00:05:18 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:43.912 00:05:18 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:43.912 00:05:18 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:43.912 00:05:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:43.912 00:05:18 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:40:43.912 00:05:18 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:43.912 00:05:18 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:40:43.912 00:05:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:43.912 00:05:18 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:43.912 00:05:18 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:43.912 00:05:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:43.912 00:05:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.447 00:05:20 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:46.447 00:40:46.447 real 0m18.067s 00:40:46.447 user 0m26.860s 00:40:46.447 sys 0m2.224s 00:40:46.447 00:05:20 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:46.447 00:05:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.447 ************************************ 00:40:46.447 END TEST nvmf_identify_passthru 00:40:46.447 ************************************ 00:40:46.447 00:05:20 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:46.447 00:05:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:46.447 00:05:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:46.447 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:40:46.447 ************************************ 00:40:46.447 START TEST nvmf_dif 00:40:46.447 ************************************ 00:40:46.447 00:05:20 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:46.447 * Looking for test storage... 
00:40:46.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:46.447 00:05:20 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:46.447 00:05:20 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:40:46.447 00:05:20 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:46.447 00:05:20 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:46.447 00:05:20 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:46.447 00:05:20 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:46.447 00:05:20 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:46.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.447 --rc genhtml_branch_coverage=1 00:40:46.447 --rc genhtml_function_coverage=1 00:40:46.447 --rc genhtml_legend=1 00:40:46.447 --rc geninfo_all_blocks=1 00:40:46.447 --rc geninfo_unexecuted_blocks=1 00:40:46.447 00:40:46.447 ' 00:40:46.447 00:05:20 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:46.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.447 --rc genhtml_branch_coverage=1 00:40:46.447 --rc genhtml_function_coverage=1 00:40:46.447 --rc genhtml_legend=1 00:40:46.447 --rc geninfo_all_blocks=1 00:40:46.447 --rc geninfo_unexecuted_blocks=1 00:40:46.447 00:40:46.447 ' 00:40:46.447 00:05:20 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:40:46.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.447 --rc genhtml_branch_coverage=1 00:40:46.447 --rc genhtml_function_coverage=1 00:40:46.447 --rc genhtml_legend=1 00:40:46.447 --rc geninfo_all_blocks=1 00:40:46.447 --rc geninfo_unexecuted_blocks=1 00:40:46.447 00:40:46.447 ' 00:40:46.447 00:05:20 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:46.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.447 --rc genhtml_branch_coverage=1 00:40:46.447 --rc genhtml_function_coverage=1 00:40:46.447 --rc genhtml_legend=1 00:40:46.447 --rc geninfo_all_blocks=1 00:40:46.448 --rc geninfo_unexecuted_blocks=1 00:40:46.448 00:40:46.448 ' 00:40:46.448 00:05:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:46.448 00:05:20 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:46.448 00:05:20 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:46.448 00:05:20 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:46.448 00:05:20 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:46.448 00:05:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.448 00:05:20 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.448 00:05:20 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.448 00:05:20 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:46.448 00:05:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:46.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:46.448 00:05:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:46.448 00:05:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:46.448 00:05:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:46.448 00:05:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:46.448 00:05:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:46.448 00:05:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:46.448 00:05:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:46.448 00:05:20 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:40:46.448 00:05:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:48.350 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.350 
00:05:22 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:48.350 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:48.350 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:48.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:48.350 00:05:22 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:48.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:48.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:40:48.351 00:40:48.351 --- 10.0.0.2 ping statistics --- 00:40:48.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.351 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:48.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:48.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:40:48.351 00:40:48.351 --- 10.0.0.1 ping statistics --- 00:40:48.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.351 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:48.351 00:05:22 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:49.724 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:49.724 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:49.724 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:49.724 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:49.724 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:49.724 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:49.724 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:49.724 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:49.724 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:49.724 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:49.724 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:49.724 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:49.724 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:49.724 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:49.724 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:49.724 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:49.724 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:49.724 00:05:23 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:49.724 00:05:23 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:49.724 00:05:23 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:49.724 00:05:23 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:49.724 00:05:23 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:49.724 00:05:23 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:49.724 00:05:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:49.724 00:05:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:49.724 00:05:23 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:49.724 00:05:23 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:49.724 00:05:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:49.724 00:05:23 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=402799 00:40:49.724 00:05:23 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:49.724 00:05:23 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 402799 00:40:49.724 00:05:23 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 402799 ']' 00:40:49.724 00:05:23 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:49.724 00:05:23 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:49.724 00:05:23 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:40:49.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:49.724 00:05:23 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:49.724 00:05:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:49.724 [2024-11-20 00:05:24.024328] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:40:49.724 [2024-11-20 00:05:24.024434] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:49.982 [2024-11-20 00:05:24.096680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:49.982 [2024-11-20 00:05:24.141656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:49.982 [2024-11-20 00:05:24.141709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:49.982 [2024-11-20 00:05:24.141737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:49.982 [2024-11-20 00:05:24.141748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:49.982 [2024-11-20 00:05:24.141758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:49.982 [2024-11-20 00:05:24.142356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:49.982 00:05:24 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:49.982 00:05:24 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:40:49.982 00:05:24 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:49.982 00:05:24 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:49.982 00:05:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:49.982 00:05:24 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:49.982 00:05:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:49.982 00:05:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:49.982 00:05:24 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.982 00:05:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:49.982 [2024-11-20 00:05:24.288780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:50.245 00:05:24 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.245 00:05:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:50.245 00:05:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:50.245 00:05:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:50.245 00:05:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:50.245 ************************************ 00:40:50.245 START TEST fio_dif_1_default 00:40:50.245 ************************************ 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:50.245 bdev_null0 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:50.245 [2024-11-20 00:05:24.349143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:50.245 { 00:40:50.245 "params": { 00:40:50.245 "name": "Nvme$subsystem", 00:40:50.245 "trtype": "$TEST_TRANSPORT", 00:40:50.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:50.245 "adrfam": "ipv4", 00:40:50.245 "trsvcid": "$NVMF_PORT", 00:40:50.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:50.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:50.245 "hdgst": ${hdgst:-false}, 00:40:50.245 "ddgst": ${ddgst:-false} 00:40:50.245 }, 00:40:50.245 "method": "bdev_nvme_attach_controller" 00:40:50.245 } 00:40:50.245 EOF 00:40:50.245 )") 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
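For readers following the trace: fio_bdev above is a thin wrapper that LD_PRELOADs SPDK's fio bdev plugin and hands fio the generated bdev JSON config plus the job file over /dev/fd descriptors. A minimal hand-run equivalent is sketched below; the plugin and fio paths are taken from this trace, while bdev.json and the job parameters (filename0 against Nvme0n1, 4k random reads at queue depth 4, matching the run summary printed further down) are illustrative stand-ins rather than the exact files the test generates.

    # Sketch only: mirrors the fio_bdev invocation seen in this trace.
    # bdev.json stands in for the config gen_nvmf_target_json writes to /dev/fd/62;
    # the job options below are illustrative, not the test's generated job file.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio \
        --ioengine=spdk_bdev \
        --spdk_json_conf=./bdev.json \
        --thread \
        --name=filename0 --filename=Nvme0n1 \
        --rw=randread --bs=4k --iodepth=4 \
        --time_based --runtime=10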
00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:50.245 "params": { 00:40:50.245 "name": "Nvme0", 00:40:50.245 "trtype": "tcp", 00:40:50.245 "traddr": "10.0.0.2", 00:40:50.245 "adrfam": "ipv4", 00:40:50.245 "trsvcid": "4420", 00:40:50.245 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:50.245 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:50.245 "hdgst": false, 00:40:50.245 "ddgst": false 00:40:50.245 }, 00:40:50.245 "method": "bdev_nvme_attach_controller" 00:40:50.245 }' 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:50.245 00:05:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:50.503 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:50.503 fio-3.35 00:40:50.503 Starting 1 thread 00:41:02.721 00:41:02.721 filename0: (groupid=0, jobs=1): err= 0: pid=403027: Wed Nov 20 00:05:35 2024 00:41:02.721 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:41:02.721 slat (nsec): min=4656, max=53642, avg=9458.33, stdev=3020.71 00:41:02.721 clat (usec): min=40822, max=45900, avg=40990.50, stdev=316.44 00:41:02.721 lat (usec): min=40830, max=45915, avg=40999.96, stdev=316.35 00:41:02.721 clat percentiles (usec): 00:41:02.721 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:02.721 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:02.721 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:02.721 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:41:02.721 | 99.99th=[45876] 00:41:02.721 bw ( KiB/s): min= 384, max= 416, per=99.47%, avg=388.80, stdev=11.72, samples=20 00:41:02.721 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:41:02.721 lat (msec) : 50=100.00% 00:41:02.721 cpu : usr=91.42%, sys=8.29%, ctx=24, majf=0, minf=227 00:41:02.721 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.721 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.721 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:02.721 00:41:02.721 Run status group 0 (all jobs): 
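The fio_dif_1_multi_subsystems test that begins here repeats the same create_subsystems step once per subsystem. Condensed from the rpc_cmd calls visible in this trace, the per-subsystem setup amounts to the RPC sequence below; scripts/rpc.py is shown as a stand-in for the test's rpc_cmd helper, and the values are the ones the trace uses for subsystem 0.

    # Per-subsystem setup driven through rpc_cmd in the dif tests (sketch;
    # scripts/rpc.py used in place of the rpc_cmd wrapper, same default socket).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420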
00:41:02.721 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10009-10009msec 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.721 00:41:02.721 real 0m11.093s 00:41:02.721 user 0m10.408s 00:41:02.721 sys 0m1.113s 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:02.721 ************************************ 00:41:02.721 END TEST fio_dif_1_default 00:41:02.721 ************************************ 00:41:02.721 00:05:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:02.721 00:05:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:02.721 00:05:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:02.721 00:05:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:02.721 ************************************ 00:41:02.721 START TEST fio_dif_1_multi_subsystems 00:41:02.721 ************************************ 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.721 bdev_null0 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.721 [2024-11-20 00:05:35.488183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.721 bdev_null1 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.721 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:02.721 { 00:41:02.721 "params": { 00:41:02.721 "name": "Nvme$subsystem", 00:41:02.721 "trtype": "$TEST_TRANSPORT", 00:41:02.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:02.721 "adrfam": "ipv4", 00:41:02.722 "trsvcid": "$NVMF_PORT", 00:41:02.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:02.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:02.722 "hdgst": ${hdgst:-false}, 00:41:02.722 "ddgst": ${ddgst:-false} 00:41:02.722 }, 00:41:02.722 "method": "bdev_nvme_attach_controller" 00:41:02.722 } 00:41:02.722 EOF 00:41:02.722 )") 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:02.722 { 00:41:02.722 "params": { 00:41:02.722 "name": "Nvme$subsystem", 00:41:02.722 "trtype": "$TEST_TRANSPORT", 00:41:02.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:02.722 "adrfam": "ipv4", 00:41:02.722 "trsvcid": "$NVMF_PORT", 00:41:02.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:02.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:02.722 "hdgst": ${hdgst:-false}, 00:41:02.722 "ddgst": ${ddgst:-false} 00:41:02.722 }, 00:41:02.722 "method": "bdev_nvme_attach_controller" 00:41:02.722 } 00:41:02.722 EOF 00:41:02.722 )") 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:02.722 "params": { 00:41:02.722 "name": "Nvme0", 00:41:02.722 "trtype": "tcp", 00:41:02.722 "traddr": "10.0.0.2", 00:41:02.722 "adrfam": "ipv4", 00:41:02.722 "trsvcid": "4420", 00:41:02.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:02.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:02.722 "hdgst": false, 00:41:02.722 "ddgst": false 00:41:02.722 }, 00:41:02.722 "method": "bdev_nvme_attach_controller" 00:41:02.722 },{ 00:41:02.722 "params": { 00:41:02.722 "name": "Nvme1", 00:41:02.722 "trtype": "tcp", 00:41:02.722 "traddr": "10.0.0.2", 00:41:02.722 "adrfam": "ipv4", 00:41:02.722 "trsvcid": "4420", 00:41:02.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:02.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:02.722 "hdgst": false, 00:41:02.722 "ddgst": false 00:41:02.722 }, 00:41:02.722 "method": "bdev_nvme_attach_controller" 00:41:02.722 }' 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:02.722 00:05:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.722 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:02.722 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:02.722 fio-3.35 00:41:02.722 Starting 2 threads 00:41:12.686 00:41:12.686 filename0: (groupid=0, jobs=1): err= 0: pid=404424: Wed Nov 20 00:05:46 2024 00:41:12.686 read: IOPS=190, BW=760KiB/s (778kB/s)(7632KiB/10040msec) 00:41:12.686 slat (nsec): min=7591, max=69561, avg=9667.89, stdev=3220.76 00:41:12.686 clat (usec): min=553, max=42344, avg=21018.06, stdev=20345.27 00:41:12.686 lat (usec): min=561, max=42356, avg=21027.73, stdev=20345.09 00:41:12.686 clat percentiles (usec): 00:41:12.686 | 1.00th=[ 570], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 603], 00:41:12.686 | 30.00th=[ 627], 40.00th=[ 652], 50.00th=[40633], 60.00th=[41157], 00:41:12.686 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:12.686 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:41:12.686 | 99.99th=[42206] 00:41:12.686 bw ( KiB/s): min= 704, max= 768, per=66.23%, avg=761.60, stdev=19.70, samples=20 00:41:12.686 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:41:12.686 lat (usec) : 750=48.22%, 1000=1.10% 00:41:12.686 lat (msec) : 2=0.37%, 4=0.21%, 50=50.10% 00:41:12.686 cpu : usr=95.18%, sys=4.46%, ctx=20, majf=0, minf=218 00:41:12.686 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:12.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.686 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.686 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:12.686 filename1: (groupid=0, jobs=1): err= 0: pid=404425: Wed Nov 20 00:05:46 2024 00:41:12.686 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10025msec) 00:41:12.686 slat (nsec): min=7437, max=29449, avg=9640.76, stdev=2717.01 00:41:12.686 clat (usec): min=40693, max=42905, avg=41055.73, stdev=293.52 00:41:12.686 lat (usec): min=40701, max=42935, avg=41065.37, stdev=293.69 00:41:12.686 clat percentiles (usec): 00:41:12.686 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:12.686 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:12.686 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:41:12.686 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:41:12.686 | 99.99th=[42730] 00:41:12.686 bw ( KiB/s): min= 384, max= 416, per=33.77%, avg=388.80, stdev=11.72, samples=20 00:41:12.686 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:41:12.686 lat (msec) : 50=100.00% 00:41:12.686 cpu : usr=94.79%, sys=4.91%, ctx=13, majf=0, minf=79 00:41:12.686 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:12.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:41:12.686 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.686 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:12.686 00:41:12.686 Run status group 0 (all jobs): 00:41:12.686 READ: bw=1149KiB/s (1177kB/s), 389KiB/s-760KiB/s (399kB/s-778kB/s), io=11.3MiB (11.8MB), run=10025-10040msec 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.686 00:41:12.686 real 0m11.440s 00:41:12.686 user 0m20.395s 00:41:12.686 sys 0m1.285s 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:12.686 00:05:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:12.686 ************************************ 00:41:12.686 END TEST fio_dif_1_multi_subsystems 00:41:12.686 ************************************ 00:41:12.686 00:05:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:12.686 00:05:46 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:12.686 00:05:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:12.686 00:05:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:12.686 ************************************ 00:41:12.686 START TEST fio_dif_rand_params 00:41:12.686 ************************************ 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:12.686 bdev_null0 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.686 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:12.686 [2024-11-20 00:05:46.969327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:12.687 { 00:41:12.687 "params": { 00:41:12.687 "name": "Nvme$subsystem", 00:41:12.687 "trtype": "$TEST_TRANSPORT", 00:41:12.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:12.687 "adrfam": "ipv4", 00:41:12.687 "trsvcid": "$NVMF_PORT", 00:41:12.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:12.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:12.687 "hdgst": ${hdgst:-false}, 00:41:12.687 "ddgst": ${ddgst:-false} 00:41:12.687 }, 00:41:12.687 "method": "bdev_nvme_attach_controller" 00:41:12.687 } 00:41:12.687 EOF 00:41:12.687 )") 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:12.687 00:05:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:12.687 "params": { 00:41:12.687 "name": "Nvme0", 00:41:12.687 "trtype": "tcp", 00:41:12.687 "traddr": "10.0.0.2", 00:41:12.687 "adrfam": "ipv4", 00:41:12.687 "trsvcid": "4420", 00:41:12.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:12.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:12.687 "hdgst": false, 00:41:12.687 "ddgst": false 00:41:12.687 }, 00:41:12.687 "method": "bdev_nvme_attach_controller" 00:41:12.687 }' 00:41:12.945 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:12.945 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:12.945 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:12.945 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:12.945 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:12.945 00:05:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:12.945 00:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:12.945 00:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:12.945 00:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:12.945 00:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:12.945 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:12.945 ... 
00:41:12.945 fio-3.35 00:41:12.945 Starting 3 threads 00:41:19.499 00:41:19.499 filename0: (groupid=0, jobs=1): err= 0: pid=405821: Wed Nov 20 00:05:52 2024 00:41:19.499 read: IOPS=233, BW=29.1MiB/s (30.5MB/s)(146MiB/5007msec) 00:41:19.499 slat (nsec): min=4660, max=27286, avg=13996.74, stdev=1342.53 00:41:19.499 clat (usec): min=6053, max=52086, avg=12851.98, stdev=3428.70 00:41:19.499 lat (usec): min=6067, max=52100, avg=12865.98, stdev=3428.68 00:41:19.499 clat percentiles (usec): 00:41:19.499 | 1.00th=[ 7570], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11076], 00:41:19.499 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12649], 60.00th=[13173], 00:41:19.499 | 70.00th=[13698], 80.00th=[14484], 90.00th=[15270], 95.00th=[15664], 00:41:19.499 | 99.00th=[17433], 99.50th=[49546], 99.90th=[51643], 99.95th=[52167], 00:41:19.499 | 99.99th=[52167] 00:41:19.499 bw ( KiB/s): min=25804, max=33024, per=33.98%, avg=29793.20, stdev=1962.10, samples=10 00:41:19.499 iops : min= 201, max= 258, avg=232.70, stdev=15.46, samples=10 00:41:19.499 lat (msec) : 10=7.88%, 20=91.35%, 50=0.34%, 100=0.43% 00:41:19.499 cpu : usr=93.81%, sys=5.67%, ctx=8, majf=0, minf=132 00:41:19.499 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:19.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.499 issued rwts: total=1167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.499 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:19.499 filename0: (groupid=0, jobs=1): err= 0: pid=405822: Wed Nov 20 00:05:52 2024 00:41:19.499 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(150MiB/5043msec) 00:41:19.499 slat (nsec): min=4733, max=30804, avg=14982.18, stdev=2085.13 00:41:19.499 clat (usec): min=5152, max=53165, avg=12531.48, stdev=3884.15 00:41:19.499 lat (usec): min=5160, max=53179, avg=12546.46, stdev=3883.89 00:41:19.499 clat percentiles (usec): 00:41:19.499 | 1.00th=[ 7373], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10945], 00:41:19.499 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12256], 60.00th=[12649], 00:41:19.499 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14353], 95.00th=[15139], 00:41:19.499 | 99.00th=[17433], 99.50th=[46400], 99.90th=[51643], 99.95th=[53216], 00:41:19.499 | 99.99th=[53216] 00:41:19.499 bw ( KiB/s): min=27648, max=33280, per=35.03%, avg=30720.00, stdev=1591.87, samples=10 00:41:19.499 iops : min= 216, max= 260, avg=240.00, stdev=12.44, samples=10 00:41:19.499 lat (msec) : 10=8.65%, 20=90.43%, 50=0.58%, 100=0.33% 00:41:19.499 cpu : usr=93.65%, sys=5.77%, ctx=9, majf=0, minf=120 00:41:19.499 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:19.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.499 issued rwts: total=1202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.499 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:19.499 filename0: (groupid=0, jobs=1): err= 0: pid=405823: Wed Nov 20 00:05:52 2024 00:41:19.499 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(136MiB/5045msec) 00:41:19.499 slat (nsec): min=4546, max=30625, avg=14048.58, stdev=1992.20 00:41:19.499 clat (usec): min=5102, max=54859, avg=13867.77, stdev=5442.37 00:41:19.499 lat (usec): min=5110, max=54873, avg=13881.82, stdev=5442.30 00:41:19.499 clat percentiles (usec): 00:41:19.499 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10814], 
20.00th=[11600], 00:41:19.499 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13173], 60.00th=[13829], 00:41:19.499 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15533], 95.00th=[16319], 00:41:19.499 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53216], 99.95th=[54789], 00:41:19.499 | 99.99th=[54789] 00:41:19.499 bw ( KiB/s): min=22528, max=30208, per=31.65%, avg=27750.40, stdev=2414.19, samples=10 00:41:19.499 iops : min= 176, max= 236, avg=216.80, stdev=18.86, samples=10 00:41:19.499 lat (msec) : 10=3.59%, 20=94.57%, 50=0.46%, 100=1.38% 00:41:19.499 cpu : usr=92.43%, sys=6.78%, ctx=140, majf=0, minf=110 00:41:19.499 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:19.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.499 issued rwts: total=1087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.499 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:19.499 00:41:19.499 Run status group 0 (all jobs): 00:41:19.499 READ: bw=85.6MiB/s (89.8MB/s), 26.9MiB/s-29.8MiB/s (28.2MB/s-31.2MB/s), io=432MiB (453MB), run=5007-5045msec 00:41:19.499 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:19.499 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:19.499 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- 
# rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 bdev_null0 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 [2024-11-20 00:05:53.196919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 bdev_null1 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 bdev_null2 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 
-- # config+=("$(cat <<-EOF 00:41:19.500 { 00:41:19.500 "params": { 00:41:19.500 "name": "Nvme$subsystem", 00:41:19.500 "trtype": "$TEST_TRANSPORT", 00:41:19.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:19.500 "adrfam": "ipv4", 00:41:19.500 "trsvcid": "$NVMF_PORT", 00:41:19.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:19.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:19.500 "hdgst": ${hdgst:-false}, 00:41:19.500 "ddgst": ${ddgst:-false} 00:41:19.500 }, 00:41:19.500 "method": "bdev_nvme_attach_controller" 00:41:19.500 } 00:41:19.500 EOF 00:41:19.500 )") 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:19.500 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:19.501 { 00:41:19.501 "params": { 00:41:19.501 "name": "Nvme$subsystem", 00:41:19.501 "trtype": "$TEST_TRANSPORT", 00:41:19.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:19.501 "adrfam": "ipv4", 00:41:19.501 "trsvcid": "$NVMF_PORT", 00:41:19.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:19.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:19.501 "hdgst": ${hdgst:-false}, 00:41:19.501 "ddgst": ${ddgst:-false} 00:41:19.501 }, 00:41:19.501 "method": "bdev_nvme_attach_controller" 00:41:19.501 } 00:41:19.501 EOF 00:41:19.501 )") 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:19.501 { 00:41:19.501 "params": { 00:41:19.501 "name": "Nvme$subsystem", 00:41:19.501 "trtype": "$TEST_TRANSPORT", 00:41:19.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:19.501 "adrfam": "ipv4", 00:41:19.501 "trsvcid": "$NVMF_PORT", 00:41:19.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:19.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:19.501 "hdgst": ${hdgst:-false}, 00:41:19.501 "ddgst": ${ddgst:-false} 00:41:19.501 }, 00:41:19.501 "method": "bdev_nvme_attach_controller" 00:41:19.501 } 00:41:19.501 EOF 00:41:19.501 )") 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:19.501 "params": { 00:41:19.501 "name": "Nvme0", 00:41:19.501 "trtype": "tcp", 00:41:19.501 "traddr": "10.0.0.2", 00:41:19.501 "adrfam": "ipv4", 00:41:19.501 "trsvcid": "4420", 00:41:19.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:19.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:19.501 "hdgst": false, 00:41:19.501 "ddgst": false 00:41:19.501 }, 00:41:19.501 "method": "bdev_nvme_attach_controller" 00:41:19.501 },{ 00:41:19.501 "params": { 00:41:19.501 "name": "Nvme1", 00:41:19.501 "trtype": "tcp", 00:41:19.501 "traddr": "10.0.0.2", 00:41:19.501 "adrfam": "ipv4", 00:41:19.501 "trsvcid": "4420", 00:41:19.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:19.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:19.501 "hdgst": false, 00:41:19.501 "ddgst": false 00:41:19.501 }, 00:41:19.501 "method": "bdev_nvme_attach_controller" 00:41:19.501 },{ 00:41:19.501 "params": { 00:41:19.501 "name": "Nvme2", 00:41:19.501 "trtype": "tcp", 00:41:19.501 "traddr": "10.0.0.2", 00:41:19.501 "adrfam": "ipv4", 00:41:19.501 "trsvcid": "4420", 00:41:19.501 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:19.501 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:19.501 "hdgst": false, 00:41:19.501 "ddgst": false 00:41:19.501 }, 00:41:19.501 "method": "bdev_nvme_attach_controller" 00:41:19.501 }' 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:19.501 00:05:53 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:19.501 00:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:19.501 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:19.501 ... 00:41:19.501 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:19.501 ... 00:41:19.501 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:19.501 ... 00:41:19.501 fio-3.35 00:41:19.501 Starting 24 threads 00:41:31.705 00:41:31.705 filename0: (groupid=0, jobs=1): err= 0: pid=406687: Wed Nov 20 00:06:04 2024 00:41:31.705 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10048msec) 00:41:31.705 slat (usec): min=21, max=120, avg=74.18, stdev=15.85 00:41:31.705 clat (msec): min=164, max=541, avg=303.93, stdev=59.72 00:41:31.705 lat (msec): min=164, max=541, avg=304.00, stdev=59.73 00:41:31.705 clat percentiles (msec): 00:41:31.705 | 1.00th=[ 188], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 247], 00:41:31.705 | 30.00th=[ 288], 40.00th=[ 317], 50.00th=[ 330], 60.00th=[ 334], 00:41:31.705 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 363], 00:41:31.705 | 99.00th=[ 376], 99.50th=[ 409], 99.90th=[ 542], 99.95th=[ 542], 00:41:31.705 | 99.99th=[ 542] 00:41:31.705 bw ( KiB/s): min= 128, max= 384, per=3.55%, avg=204.80, stdev=70.15, samples=20 00:41:31.705 iops : min= 32, max= 96, avg=51.20, stdev=17.54, samples=20 00:41:31.705 lat (msec) : 250=21.59%, 500=78.03%, 750=0.38% 00:41:31.705 cpu : usr=98.46%, sys=1.11%, ctx=14, majf=0, minf=25 00:41:31.705 IO depths : 1=1.3%, 2=7.6%, 4=25.0%, 8=54.9%, 16=11.2%, 32=0.0%, >=64=0.0% 00:41:31.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.705 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.705 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.705 filename0: (groupid=0, jobs=1): err= 0: pid=406688: Wed Nov 20 00:06:04 2024 00:41:31.705 read: IOPS=62, BW=248KiB/s (254kB/s)(2496KiB/10061msec) 00:41:31.705 slat (usec): min=8, max=134, avg=40.01, stdev=31.66 00:41:31.705 clat (msec): min=162, max=395, avg=257.61, stdev=57.55 00:41:31.705 lat (msec): min=162, max=395, avg=257.65, stdev=57.57 00:41:31.705 clat percentiles (msec): 00:41:31.705 | 1.00th=[ 163], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 190], 00:41:31.705 | 30.00th=[ 234], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 266], 00:41:31.705 | 70.00th=[ 296], 80.00th=[ 313], 90.00th=[ 342], 95.00th=[ 351], 00:41:31.705 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 397], 99.95th=[ 397], 00:41:31.705 | 99.99th=[ 397] 00:41:31.705 bw ( KiB/s): min= 128, max= 384, per=4.23%, avg=243.20, stdev=70.72, samples=20 00:41:31.705 iops : min= 32, max= 96, avg=60.80, stdev=17.68, samples=20 00:41:31.705 lat (msec) : 250=36.22%, 500=63.78% 00:41:31.705 cpu : usr=98.05%, sys=1.34%, ctx=39, majf=0, minf=20 00:41:31.705 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:41:31.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:41:31.705 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.706 filename0: (groupid=0, jobs=1): err= 0: pid=406689: Wed Nov 20 00:06:04 2024 00:41:31.706 read: IOPS=67, BW=268KiB/s (275kB/s)(2688KiB/10027msec) 00:41:31.706 slat (usec): min=4, max=121, avg=51.38, stdev=33.47 00:41:31.706 clat (msec): min=3, max=362, avg=238.27, stdev=102.67 00:41:31.706 lat (msec): min=3, max=362, avg=238.32, stdev=102.69 00:41:31.706 clat percentiles (msec): 00:41:31.706 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 67], 20.00th=[ 169], 00:41:31.706 | 30.00th=[ 190], 40.00th=[ 241], 50.00th=[ 264], 60.00th=[ 279], 00:41:31.706 | 70.00th=[ 313], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 359], 00:41:31.706 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:41:31.706 | 99.99th=[ 363] 00:41:31.706 bw ( KiB/s): min= 128, max= 896, per=4.57%, avg=262.40, stdev=168.56, samples=20 00:41:31.706 iops : min= 32, max= 224, avg=65.60, stdev=42.14, samples=20 00:41:31.706 lat (msec) : 4=2.38%, 10=4.76%, 20=2.38%, 100=4.46%, 250=28.87% 00:41:31.706 lat (msec) : 500=57.14% 00:41:31.706 cpu : usr=97.74%, sys=1.44%, ctx=173, majf=0, minf=21 00:41:31.706 IO depths : 1=5.7%, 2=11.8%, 4=24.4%, 8=51.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:41:31.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.706 filename0: (groupid=0, jobs=1): err= 0: pid=406690: Wed Nov 20 00:06:04 2024 00:41:31.706 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10050msec) 00:41:31.706 slat (nsec): min=8036, max=80165, avg=16590.86, stdev=15554.03 00:41:31.706 clat (msec): min=159, max=544, avg=304.41, stdev=70.92 00:41:31.706 lat (msec): min=159, max=544, avg=304.42, stdev=70.91 00:41:31.706 clat percentiles (msec): 00:41:31.706 | 1.00th=[ 167], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 226], 00:41:31.706 | 30.00th=[ 264], 40.00th=[ 309], 50.00th=[ 326], 60.00th=[ 338], 00:41:31.706 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 372], 95.00th=[ 409], 00:41:31.706 | 99.00th=[ 460], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:41:31.706 | 99.99th=[ 542] 00:41:31.706 bw ( KiB/s): min= 128, max= 384, per=3.55%, avg=204.80, stdev=71.48, samples=20 00:41:31.706 iops : min= 32, max= 96, avg=51.20, stdev=17.87, samples=20 00:41:31.706 lat (msec) : 250=24.24%, 500=75.00%, 750=0.76% 00:41:31.706 cpu : usr=98.51%, sys=1.05%, ctx=48, majf=0, minf=17 00:41:31.706 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:41:31.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.706 filename0: (groupid=0, jobs=1): err= 0: pid=406691: Wed Nov 20 00:06:04 2024 00:41:31.706 read: IOPS=81, BW=326KiB/s (334kB/s)(3296KiB/10103msec) 00:41:31.706 slat (usec): min=7, max=131, avg=33.11, stdev=27.94 00:41:31.706 clat (msec): min=2, max=367, avg=195.77, stdev=77.42 00:41:31.706 lat (msec): min=2, max=367, avg=195.80, stdev=77.43 00:41:31.706 clat 
percentiles (msec): 00:41:31.706 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 67], 20.00th=[ 171], 00:41:31.706 | 30.00th=[ 190], 40.00th=[ 203], 50.00th=[ 211], 60.00th=[ 224], 00:41:31.706 | 70.00th=[ 232], 80.00th=[ 251], 90.00th=[ 271], 95.00th=[ 284], 00:41:31.706 | 99.00th=[ 334], 99.50th=[ 363], 99.90th=[ 368], 99.95th=[ 368], 00:41:31.706 | 99.99th=[ 368] 00:41:31.706 bw ( KiB/s): min= 256, max= 1024, per=5.63%, avg=323.20, stdev=168.98, samples=20 00:41:31.706 iops : min= 64, max= 256, avg=80.80, stdev=42.24, samples=20 00:41:31.706 lat (msec) : 4=3.88%, 10=1.94%, 20=3.88%, 100=3.64%, 250=66.99% 00:41:31.706 lat (msec) : 500=19.66% 00:41:31.706 cpu : usr=98.08%, sys=1.43%, ctx=34, majf=0, minf=63 00:41:31.706 IO depths : 1=1.1%, 2=3.0%, 4=11.8%, 8=72.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:41:31.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 complete : 0=0.0%, 4=90.3%, 8=4.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 issued rwts: total=824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.706 filename0: (groupid=0, jobs=1): err= 0: pid=406692: Wed Nov 20 00:06:04 2024 00:41:31.706 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10051msec) 00:41:31.706 slat (usec): min=8, max=134, avg=36.73, stdev=16.64 00:41:31.706 clat (msec): min=136, max=437, avg=304.24, stdev=66.24 00:41:31.706 lat (msec): min=136, max=437, avg=304.28, stdev=66.24 00:41:31.706 clat percentiles (msec): 00:41:31.706 | 1.00th=[ 138], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 243], 00:41:31.706 | 30.00th=[ 292], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 334], 00:41:31.706 | 70.00th=[ 342], 80.00th=[ 351], 90.00th=[ 368], 95.00th=[ 376], 00:41:31.706 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 439], 99.95th=[ 439], 00:41:31.706 | 99.99th=[ 439] 00:41:31.706 bw ( KiB/s): min= 128, max= 384, per=3.55%, avg=204.80, stdev=75.51, samples=20 00:41:31.706 iops : min= 32, max= 96, avg=51.20, stdev=18.88, samples=20 00:41:31.706 lat (msec) : 250=21.97%, 500=78.03% 00:41:31.706 cpu : usr=98.46%, sys=1.11%, ctx=37, majf=0, minf=30 00:41:31.706 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:41:31.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.706 filename0: (groupid=0, jobs=1): err= 0: pid=406693: Wed Nov 20 00:06:04 2024 00:41:31.706 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10059msec) 00:41:31.706 slat (nsec): min=11013, max=85554, avg=39089.36, stdev=12395.47 00:41:31.706 clat (msec): min=164, max=412, avg=304.48, stdev=57.42 00:41:31.706 lat (msec): min=164, max=412, avg=304.52, stdev=57.42 00:41:31.706 clat percentiles (msec): 00:41:31.706 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 255], 00:41:31.706 | 30.00th=[ 296], 40.00th=[ 313], 50.00th=[ 321], 60.00th=[ 334], 00:41:31.706 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 368], 00:41:31.706 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 414], 99.95th=[ 414], 00:41:31.706 | 99.99th=[ 414] 00:41:31.706 bw ( KiB/s): min= 128, max= 256, per=3.55%, avg=204.80, stdev=62.85, samples=20 00:41:31.706 iops : min= 32, max= 64, avg=51.20, stdev=15.71, samples=20 00:41:31.706 lat (msec) : 250=18.94%, 500=81.06% 00:41:31.706 cpu : usr=98.52%, sys=1.06%, ctx=20, 
majf=0, minf=40 00:41:31.706 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:41:31.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.706 filename0: (groupid=0, jobs=1): err= 0: pid=406694: Wed Nov 20 00:06:04 2024 00:41:31.706 read: IOPS=78, BW=316KiB/s (323kB/s)(3184KiB/10080msec) 00:41:31.706 slat (nsec): min=7986, max=95325, avg=21205.49, stdev=19748.09 00:41:31.706 clat (msec): min=78, max=356, avg=202.13, stdev=59.23 00:41:31.706 lat (msec): min=78, max=356, avg=202.15, stdev=59.22 00:41:31.706 clat percentiles (msec): 00:41:31.706 | 1.00th=[ 80], 5.00th=[ 116], 10.00th=[ 124], 20.00th=[ 138], 00:41:31.706 | 30.00th=[ 169], 40.00th=[ 194], 50.00th=[ 209], 60.00th=[ 215], 00:41:31.706 | 70.00th=[ 228], 80.00th=[ 255], 90.00th=[ 275], 95.00th=[ 296], 00:41:31.706 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 355], 00:41:31.706 | 99.99th=[ 355] 00:41:31.706 bw ( KiB/s): min= 176, max= 513, per=5.44%, avg=312.05, stdev=88.14, samples=20 00:41:31.706 iops : min= 44, max= 128, avg=78.00, stdev=22.00, samples=20 00:41:31.706 lat (msec) : 100=4.02%, 250=73.62%, 500=22.36% 00:41:31.706 cpu : usr=98.31%, sys=1.17%, ctx=9, majf=0, minf=41 00:41:31.706 IO depths : 1=0.4%, 2=1.0%, 4=7.7%, 8=78.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:41:31.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 complete : 0=0.0%, 4=89.1%, 8=5.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 issued rwts: total=796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.706 filename1: (groupid=0, jobs=1): err= 0: pid=406695: Wed Nov 20 00:06:04 2024 00:41:31.706 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10048msec) 00:41:31.706 slat (nsec): min=8781, max=91761, avg=36798.05, stdev=13568.80 00:41:31.706 clat (msec): min=138, max=429, avg=304.13, stdev=60.32 00:41:31.706 lat (msec): min=138, max=429, avg=304.16, stdev=60.32 00:41:31.706 clat percentiles (msec): 00:41:31.706 | 1.00th=[ 165], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 255], 00:41:31.706 | 30.00th=[ 279], 40.00th=[ 313], 50.00th=[ 321], 60.00th=[ 334], 00:41:31.706 | 70.00th=[ 342], 80.00th=[ 351], 90.00th=[ 368], 95.00th=[ 376], 00:41:31.706 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 430], 99.95th=[ 430], 00:41:31.706 | 99.99th=[ 430] 00:41:31.706 bw ( KiB/s): min= 128, max= 368, per=3.55%, avg=204.80, stdev=72.60, samples=20 00:41:31.706 iops : min= 32, max= 92, avg=51.20, stdev=18.15, samples=20 00:41:31.706 lat (msec) : 250=19.32%, 500=80.68% 00:41:31.706 cpu : usr=98.03%, sys=1.31%, ctx=148, majf=0, minf=41 00:41:31.706 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:41:31.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.706 filename1: (groupid=0, jobs=1): err= 0: pid=406696: Wed Nov 20 00:06:04 2024 00:41:31.706 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10046msec) 00:41:31.706 slat (nsec): min=9682, max=67383, avg=34308.35, stdev=11026.88 00:41:31.706 clat 
(msec): min=187, max=410, avg=304.11, stdev=57.42 00:41:31.706 lat (msec): min=187, max=410, avg=304.14, stdev=57.43 00:41:31.706 clat percentiles (msec): 00:41:31.706 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 255], 00:41:31.706 | 30.00th=[ 296], 40.00th=[ 313], 50.00th=[ 321], 60.00th=[ 338], 00:41:31.706 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 368], 00:41:31.706 | 99.00th=[ 380], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:41:31.706 | 99.99th=[ 409] 00:41:31.706 bw ( KiB/s): min= 128, max= 384, per=3.55%, avg=204.80, stdev=75.33, samples=20 00:41:31.706 iops : min= 32, max= 96, avg=51.20, stdev=18.83, samples=20 00:41:31.706 lat (msec) : 250=18.94%, 500=81.06% 00:41:31.706 cpu : usr=98.37%, sys=1.10%, ctx=51, majf=0, minf=35 00:41:31.706 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:41:31.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.706 filename1: (groupid=0, jobs=1): err= 0: pid=406697: Wed Nov 20 00:06:04 2024 00:41:31.706 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10080msec) 00:41:31.706 slat (usec): min=8, max=155, avg=45.58, stdev=31.75 00:41:31.706 clat (msec): min=148, max=364, avg=264.83, stdev=63.45 00:41:31.706 lat (msec): min=148, max=364, avg=264.88, stdev=63.46 00:41:31.706 clat percentiles (msec): 00:41:31.706 | 1.00th=[ 148], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 190], 00:41:31.706 | 30.00th=[ 228], 40.00th=[ 257], 50.00th=[ 266], 60.00th=[ 296], 00:41:31.706 | 70.00th=[ 317], 80.00th=[ 334], 90.00th=[ 342], 95.00th=[ 351], 00:41:31.706 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:41:31.706 | 99.99th=[ 363] 00:41:31.706 bw ( KiB/s): min= 128, max= 384, per=4.11%, avg=236.80, stdev=71.29, samples=20 00:41:31.706 iops : min= 32, max= 96, avg=59.20, stdev=17.82, samples=20 00:41:31.706 lat (msec) : 250=33.22%, 500=66.78% 00:41:31.706 cpu : usr=98.01%, sys=1.29%, ctx=80, majf=0, minf=47 00:41:31.706 IO depths : 1=0.2%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:41:31.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.706 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.706 filename1: (groupid=0, jobs=1): err= 0: pid=406698: Wed Nov 20 00:06:04 2024 00:41:31.706 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10077msec) 00:41:31.706 slat (usec): min=6, max=129, avg=62.39, stdev=22.98 00:41:31.706 clat (msec): min=78, max=375, avg=287.36, stdev=77.68 00:41:31.706 lat (msec): min=78, max=375, avg=287.43, stdev=77.70 00:41:31.706 clat percentiles (msec): 00:41:31.706 | 1.00th=[ 80], 5.00th=[ 90], 10.00th=[ 188], 20.00th=[ 207], 00:41:31.706 | 30.00th=[ 255], 40.00th=[ 305], 50.00th=[ 317], 60.00th=[ 326], 00:41:31.706 | 70.00th=[ 342], 80.00th=[ 342], 90.00th=[ 363], 95.00th=[ 368], 00:41:31.706 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:41:31.707 | 99.99th=[ 376] 00:41:31.707 bw ( KiB/s): min= 128, max= 384, per=3.78%, avg=217.60, stdev=82.96, samples=20 00:41:31.707 iops : min= 32, max= 96, avg=54.40, stdev=20.74, samples=20 00:41:31.707 lat (msec) : 100=5.36%, 
250=23.21%, 500=71.43% 00:41:31.707 cpu : usr=98.24%, sys=1.26%, ctx=47, majf=0, minf=40 00:41:31.707 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:41:31.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.707 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.707 filename1: (groupid=0, jobs=1): err= 0: pid=406699: Wed Nov 20 00:06:04 2024 00:41:31.707 read: IOPS=52, BW=209KiB/s (214kB/s)(2104KiB/10050msec) 00:41:31.707 slat (nsec): min=8017, max=89915, avg=15954.59, stdev=14159.82 00:41:31.707 clat (msec): min=67, max=475, avg=305.55, stdev=76.98 00:41:31.707 lat (msec): min=67, max=475, avg=305.56, stdev=76.98 00:41:31.707 clat percentiles (msec): 00:41:31.707 | 1.00th=[ 68], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 243], 00:41:31.707 | 30.00th=[ 275], 40.00th=[ 317], 50.00th=[ 326], 60.00th=[ 342], 00:41:31.707 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 376], 95.00th=[ 409], 00:41:31.707 | 99.00th=[ 447], 99.50th=[ 464], 99.90th=[ 477], 99.95th=[ 477], 00:41:31.707 | 99.99th=[ 477] 00:41:31.707 bw ( KiB/s): min= 128, max= 368, per=3.54%, avg=204.00, stdev=70.96, samples=20 00:41:31.707 iops : min= 32, max= 92, avg=51.00, stdev=17.74, samples=20 00:41:31.707 lat (msec) : 100=2.66%, 250=20.53%, 500=76.81% 00:41:31.707 cpu : usr=98.65%, sys=0.97%, ctx=18, majf=0, minf=37 00:41:31.707 IO depths : 1=3.4%, 2=9.7%, 4=25.1%, 8=52.9%, 16=8.9%, 32=0.0%, >=64=0.0% 00:41:31.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.707 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.707 filename1: (groupid=0, jobs=1): err= 0: pid=406700: Wed Nov 20 00:06:04 2024 00:41:31.707 read: IOPS=54, BW=216KiB/s (221kB/s)(2176KiB/10068msec) 00:41:31.707 slat (usec): min=7, max=115, avg=34.40, stdev=13.53 00:41:31.707 clat (msec): min=130, max=375, avg=295.81, stdev=65.79 00:41:31.707 lat (msec): min=130, max=375, avg=295.85, stdev=65.79 00:41:31.707 clat percentiles (msec): 00:41:31.707 | 1.00th=[ 131], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 230], 00:41:31.707 | 30.00th=[ 279], 40.00th=[ 313], 50.00th=[ 321], 60.00th=[ 334], 00:41:31.707 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 368], 00:41:31.707 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:41:31.707 | 99.99th=[ 376] 00:41:31.707 bw ( KiB/s): min= 128, max= 384, per=3.68%, avg=211.20, stdev=75.15, samples=20 00:41:31.707 iops : min= 32, max= 96, avg=52.80, stdev=18.79, samples=20 00:41:31.707 lat (msec) : 250=29.04%, 500=70.96% 00:41:31.707 cpu : usr=98.42%, sys=1.14%, ctx=40, majf=0, minf=25 00:41:31.707 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:41:31.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.707 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.707 filename1: (groupid=0, jobs=1): err= 0: pid=406701: Wed Nov 20 00:06:04 2024 00:41:31.707 read: IOPS=73, BW=292KiB/s (299kB/s)(2944KiB/10077msec) 00:41:31.707 slat (nsec): min=4813, 
max=95010, avg=19233.09, stdev=18744.53 00:41:31.707 clat (msec): min=109, max=336, avg=218.83, stdev=42.62 00:41:31.707 lat (msec): min=110, max=336, avg=218.85, stdev=42.62 00:41:31.707 clat percentiles (msec): 00:41:31.707 | 1.00th=[ 138], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 176], 00:41:31.707 | 30.00th=[ 186], 40.00th=[ 199], 50.00th=[ 215], 60.00th=[ 245], 00:41:31.707 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:41:31.707 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 338], 99.95th=[ 338], 00:41:31.707 | 99.99th=[ 338] 00:41:31.707 bw ( KiB/s): min= 144, max= 384, per=5.02%, avg=288.00, stdev=74.14, samples=20 00:41:31.707 iops : min= 36, max= 96, avg=72.00, stdev=18.54, samples=20 00:41:31.707 lat (msec) : 250=66.03%, 500=33.97% 00:41:31.707 cpu : usr=98.57%, sys=0.98%, ctx=11, majf=0, minf=28 00:41:31.707 IO depths : 1=1.0%, 2=7.2%, 4=25.0%, 8=55.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:41:31.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.707 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.707 filename1: (groupid=0, jobs=1): err= 0: pid=406702: Wed Nov 20 00:06:04 2024 00:41:31.707 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10048msec) 00:41:31.707 slat (nsec): min=8218, max=87168, avg=25774.14, stdev=17181.99 00:41:31.707 clat (msec): min=130, max=556, avg=304.19, stdev=77.81 00:41:31.707 lat (msec): min=130, max=556, avg=304.22, stdev=77.80 00:41:31.707 clat percentiles (msec): 00:41:31.707 | 1.00th=[ 136], 5.00th=[ 169], 10.00th=[ 190], 20.00th=[ 226], 00:41:31.707 | 30.00th=[ 279], 40.00th=[ 300], 50.00th=[ 326], 60.00th=[ 334], 00:41:31.707 | 70.00th=[ 342], 80.00th=[ 363], 90.00th=[ 368], 95.00th=[ 426], 00:41:31.707 | 99.00th=[ 514], 99.50th=[ 518], 99.90th=[ 558], 99.95th=[ 558], 00:41:31.707 | 99.99th=[ 558] 00:41:31.707 bw ( KiB/s): min= 128, max= 368, per=3.55%, avg=204.80, stdev=70.15, samples=20 00:41:31.707 iops : min= 32, max= 92, avg=51.20, stdev=17.54, samples=20 00:41:31.707 lat (msec) : 250=23.86%, 500=74.62%, 750=1.52% 00:41:31.707 cpu : usr=98.48%, sys=1.06%, ctx=37, majf=0, minf=42 00:41:31.707 IO depths : 1=3.0%, 2=9.1%, 4=24.4%, 8=54.0%, 16=9.5%, 32=0.0%, >=64=0.0% 00:41:31.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.707 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.707 filename2: (groupid=0, jobs=1): err= 0: pid=406703: Wed Nov 20 00:06:04 2024 00:41:31.707 read: IOPS=76, BW=305KiB/s (312kB/s)(3072KiB/10080msec) 00:41:31.707 slat (nsec): min=7908, max=88220, avg=18644.64, stdev=16468.89 00:41:31.707 clat (msec): min=78, max=289, avg=209.77, stdev=53.85 00:41:31.707 lat (msec): min=78, max=289, avg=209.79, stdev=53.85 00:41:31.707 clat percentiles (msec): 00:41:31.707 | 1.00th=[ 80], 5.00th=[ 118], 10.00th=[ 138], 20.00th=[ 163], 00:41:31.707 | 30.00th=[ 174], 40.00th=[ 186], 50.00th=[ 213], 60.00th=[ 249], 00:41:31.707 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 266], 95.00th=[ 279], 00:41:31.707 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 292], 99.95th=[ 292], 00:41:31.707 | 99.99th=[ 292] 00:41:31.707 bw ( KiB/s): min= 144, max= 513, per=5.23%, avg=300.85, stdev=82.48, samples=20 00:41:31.707 iops : min= 
36, max= 128, avg=75.20, stdev=20.59, samples=20 00:41:31.707 lat (msec) : 100=4.17%, 250=59.90%, 500=35.94% 00:41:31.707 cpu : usr=98.17%, sys=1.37%, ctx=22, majf=0, minf=36 00:41:31.707 IO depths : 1=2.7%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:41:31.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.707 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.707 filename2: (groupid=0, jobs=1): err= 0: pid=406704: Wed Nov 20 00:06:04 2024 00:41:31.707 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10046msec) 00:41:31.707 slat (nsec): min=6571, max=80168, avg=34839.88, stdev=12608.68 00:41:31.707 clat (msec): min=187, max=375, avg=304.11, stdev=56.10 00:41:31.707 lat (msec): min=187, max=375, avg=304.14, stdev=56.10 00:41:31.707 clat percentiles (msec): 00:41:31.707 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 255], 00:41:31.707 | 30.00th=[ 296], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 334], 00:41:31.707 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 368], 00:41:31.707 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:41:31.707 | 99.99th=[ 376] 00:41:31.707 bw ( KiB/s): min= 128, max= 384, per=3.55%, avg=204.80, stdev=76.58, samples=20 00:41:31.707 iops : min= 32, max= 96, avg=51.20, stdev=19.14, samples=20 00:41:31.707 lat (msec) : 250=18.18%, 500=81.82% 00:41:31.707 cpu : usr=98.16%, sys=1.21%, ctx=150, majf=0, minf=23 00:41:31.707 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:31.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.707 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.707 filename2: (groupid=0, jobs=1): err= 0: pid=406705: Wed Nov 20 00:06:04 2024 00:41:31.707 read: IOPS=69, BW=279KiB/s (286kB/s)(2816KiB/10077msec) 00:41:31.707 slat (usec): min=6, max=114, avg=28.57, stdev=26.73 00:41:31.707 clat (msec): min=79, max=452, avg=227.53, stdev=54.85 00:41:31.707 lat (msec): min=79, max=452, avg=227.56, stdev=54.86 00:41:31.707 clat percentiles (msec): 00:41:31.707 | 1.00th=[ 81], 5.00th=[ 153], 10.00th=[ 165], 20.00th=[ 182], 00:41:31.707 | 30.00th=[ 192], 40.00th=[ 215], 50.00th=[ 234], 60.00th=[ 251], 00:41:31.707 | 70.00th=[ 264], 80.00th=[ 275], 90.00th=[ 288], 95.00th=[ 300], 00:41:31.707 | 99.00th=[ 317], 99.50th=[ 321], 99.90th=[ 451], 99.95th=[ 451], 00:41:31.707 | 99.99th=[ 451] 00:41:31.707 bw ( KiB/s): min= 144, max= 384, per=4.79%, avg=275.20, stdev=77.45, samples=20 00:41:31.707 iops : min= 36, max= 96, avg=68.80, stdev=19.36, samples=20 00:41:31.707 lat (msec) : 100=4.26%, 250=54.83%, 500=40.91% 00:41:31.707 cpu : usr=98.11%, sys=1.30%, ctx=67, majf=0, minf=33 00:41:31.707 IO depths : 1=0.3%, 2=6.5%, 4=25.0%, 8=56.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:41:31.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.707 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.707 filename2: (groupid=0, jobs=1): err= 0: pid=406706: Wed Nov 20 00:06:04 2024 00:41:31.707 
read: IOPS=64, BW=256KiB/s (262kB/s)(2584KiB/10081msec) 00:41:31.707 slat (usec): min=8, max=132, avg=40.10, stdev=31.28 00:41:31.707 clat (msec): min=79, max=445, avg=249.18, stdev=67.98 00:41:31.707 lat (msec): min=79, max=445, avg=249.22, stdev=67.99 00:41:31.707 clat percentiles (msec): 00:41:31.707 | 1.00th=[ 80], 5.00th=[ 144], 10.00th=[ 178], 20.00th=[ 194], 00:41:31.707 | 30.00th=[ 211], 40.00th=[ 228], 50.00th=[ 251], 60.00th=[ 266], 00:41:31.707 | 70.00th=[ 288], 80.00th=[ 317], 90.00th=[ 334], 95.00th=[ 359], 00:41:31.707 | 99.00th=[ 372], 99.50th=[ 426], 99.90th=[ 447], 99.95th=[ 447], 00:41:31.707 | 99.99th=[ 447] 00:41:31.707 bw ( KiB/s): min= 128, max= 384, per=4.39%, avg=252.00, stdev=73.39, samples=20 00:41:31.707 iops : min= 32, max= 96, avg=63.00, stdev=18.35, samples=20 00:41:31.707 lat (msec) : 100=4.64%, 250=45.82%, 500=49.54% 00:41:31.707 cpu : usr=97.99%, sys=1.34%, ctx=46, majf=0, minf=36 00:41:31.707 IO depths : 1=0.8%, 2=5.3%, 4=19.7%, 8=62.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:41:31.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 complete : 0=0.0%, 4=92.7%, 8=1.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.707 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.707 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.707 filename2: (groupid=0, jobs=1): err= 0: pid=406707: Wed Nov 20 00:06:04 2024 00:41:31.707 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10062msec) 00:41:31.707 slat (nsec): min=8011, max=64717, avg=28913.58, stdev=9067.54 00:41:31.707 clat (msec): min=187, max=426, avg=304.66, stdev=56.30 00:41:31.707 lat (msec): min=187, max=426, avg=304.68, stdev=56.30 00:41:31.707 clat percentiles (msec): 00:41:31.707 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 255], 00:41:31.707 | 30.00th=[ 296], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 334], 00:41:31.707 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 368], 00:41:31.708 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 426], 99.95th=[ 426], 00:41:31.708 | 99.99th=[ 426] 00:41:31.708 bw ( KiB/s): min= 128, max= 256, per=3.55%, avg=204.80, stdev=64.34, samples=20 00:41:31.708 iops : min= 32, max= 64, avg=51.20, stdev=16.08, samples=20 00:41:31.708 lat (msec) : 250=18.18%, 500=81.82% 00:41:31.708 cpu : usr=98.15%, sys=1.23%, ctx=14, majf=0, minf=44 00:41:31.708 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:31.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.708 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.708 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.708 filename2: (groupid=0, jobs=1): err= 0: pid=406708: Wed Nov 20 00:06:04 2024 00:41:31.708 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10058msec) 00:41:31.708 slat (nsec): min=3974, max=65972, avg=35144.34, stdev=11665.20 00:41:31.708 clat (msec): min=187, max=375, avg=304.45, stdev=55.97 00:41:31.708 lat (msec): min=187, max=375, avg=304.49, stdev=55.97 00:41:31.708 clat percentiles (msec): 00:41:31.708 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 255], 00:41:31.708 | 30.00th=[ 296], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 334], 00:41:31.708 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 368], 00:41:31.708 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:41:31.708 | 99.99th=[ 376] 00:41:31.708 bw ( KiB/s): min= 128, 
max= 256, per=3.55%, avg=204.80, stdev=64.34, samples=20 00:41:31.708 iops : min= 32, max= 64, avg=51.20, stdev=16.08, samples=20 00:41:31.708 lat (msec) : 250=18.18%, 500=81.82% 00:41:31.708 cpu : usr=98.07%, sys=1.43%, ctx=18, majf=0, minf=25 00:41:31.708 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:31.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.708 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.708 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.708 filename2: (groupid=0, jobs=1): err= 0: pid=406709: Wed Nov 20 00:06:04 2024 00:41:31.708 read: IOPS=67, BW=269KiB/s (275kB/s)(2712KiB/10082msec) 00:41:31.708 slat (usec): min=7, max=106, avg=34.46, stdev=30.16 00:41:31.708 clat (msec): min=79, max=474, avg=237.46, stdev=62.08 00:41:31.708 lat (msec): min=79, max=475, avg=237.49, stdev=62.10 00:41:31.708 clat percentiles (msec): 00:41:31.708 | 1.00th=[ 80], 5.00th=[ 140], 10.00th=[ 188], 20.00th=[ 192], 00:41:31.708 | 30.00th=[ 207], 40.00th=[ 213], 50.00th=[ 228], 60.00th=[ 255], 00:41:31.708 | 70.00th=[ 271], 80.00th=[ 292], 90.00th=[ 317], 95.00th=[ 334], 00:41:31.708 | 99.00th=[ 363], 99.50th=[ 393], 99.90th=[ 477], 99.95th=[ 477], 00:41:31.708 | 99.99th=[ 477] 00:41:31.708 bw ( KiB/s): min= 128, max= 384, per=4.60%, avg=264.80, stdev=61.31, samples=20 00:41:31.708 iops : min= 32, max= 96, avg=66.20, stdev=15.33, samples=20 00:41:31.708 lat (msec) : 100=4.72%, 250=54.87%, 500=40.41% 00:41:31.708 cpu : usr=98.26%, sys=1.29%, ctx=19, majf=0, minf=33 00:41:31.708 IO depths : 1=2.4%, 2=5.5%, 4=15.5%, 8=66.5%, 16=10.2%, 32=0.0%, >=64=0.0% 00:41:31.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.708 complete : 0=0.0%, 4=91.3%, 8=3.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.708 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.708 filename2: (groupid=0, jobs=1): err= 0: pid=406710: Wed Nov 20 00:06:04 2024 00:41:31.708 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10049msec) 00:41:31.708 slat (usec): min=8, max=118, avg=41.42, stdev=21.17 00:41:31.708 clat (msec): min=130, max=541, avg=304.09, stdev=69.14 00:41:31.708 lat (msec): min=130, max=541, avg=304.13, stdev=69.14 00:41:31.708 clat percentiles (msec): 00:41:31.708 | 1.00th=[ 161], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 243], 00:41:31.708 | 30.00th=[ 279], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 334], 00:41:31.708 | 70.00th=[ 342], 80.00th=[ 363], 90.00th=[ 368], 95.00th=[ 409], 00:41:31.708 | 99.00th=[ 430], 99.50th=[ 477], 99.90th=[ 542], 99.95th=[ 542], 00:41:31.708 | 99.99th=[ 542] 00:41:31.708 bw ( KiB/s): min= 128, max= 384, per=3.55%, avg=204.80, stdev=74.07, samples=20 00:41:31.708 iops : min= 32, max= 96, avg=51.20, stdev=18.52, samples=20 00:41:31.708 lat (msec) : 250=22.35%, 500=77.27%, 750=0.38% 00:41:31.708 cpu : usr=98.41%, sys=1.06%, ctx=39, majf=0, minf=22 00:41:31.708 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:41:31.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.708 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.708 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.708 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:31.708 00:41:31.708 Run status 
group 0 (all jobs): 00:41:31.708 READ: bw=5738KiB/s (5876kB/s), 209KiB/s-326KiB/s (214kB/s-334kB/s), io=56.6MiB (59.4MB), run=10027-10103msec 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 bdev_null0 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 [2024-11-20 00:06:04.977633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 bdev_null1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.708 00:06:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.708 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:31.709 { 00:41:31.709 "params": { 00:41:31.709 "name": "Nvme$subsystem", 00:41:31.709 "trtype": "$TEST_TRANSPORT", 00:41:31.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:31.709 "adrfam": "ipv4", 00:41:31.709 "trsvcid": "$NVMF_PORT", 00:41:31.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:31.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:31.709 "hdgst": ${hdgst:-false}, 00:41:31.709 "ddgst": ${ddgst:-false} 00:41:31.709 }, 00:41:31.709 "method": "bdev_nvme_attach_controller" 00:41:31.709 } 00:41:31.709 EOF 00:41:31.709 )") 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:31.709 { 00:41:31.709 "params": { 00:41:31.709 "name": "Nvme$subsystem", 00:41:31.709 "trtype": "$TEST_TRANSPORT", 00:41:31.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:31.709 "adrfam": "ipv4", 00:41:31.709 "trsvcid": "$NVMF_PORT", 00:41:31.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:31.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:31.709 "hdgst": ${hdgst:-false}, 00:41:31.709 "ddgst": ${ddgst:-false} 00:41:31.709 }, 00:41:31.709 "method": "bdev_nvme_attach_controller" 00:41:31.709 } 00:41:31.709 EOF 00:41:31.709 )") 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
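For reference, the rpc_cmd calls traced in this stretch reduce to a short target-side recipe. A minimal sketch using SPDK's scripts/rpc.py; the rpc.py path and an already-running nvmf_tgt with a TCP transport are assumptions here, while the bdev sizes, NQNs, serial numbers and the 10.0.0.2:4420 listener are copied from the trace above:

# Back each subsystem with a 64 MB null bdev: 512-byte blocks, 16-byte metadata, DIF type 1
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1

# One NVMe-oF subsystem per bdev, namespace attached, NVMe/TCP listener on 10.0.0.2:4420
for i in 0 1; do
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done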
00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:31.709 "params": { 00:41:31.709 "name": "Nvme0", 00:41:31.709 "trtype": "tcp", 00:41:31.709 "traddr": "10.0.0.2", 00:41:31.709 "adrfam": "ipv4", 00:41:31.709 "trsvcid": "4420", 00:41:31.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:31.709 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:31.709 "hdgst": false, 00:41:31.709 "ddgst": false 00:41:31.709 }, 00:41:31.709 "method": "bdev_nvme_attach_controller" 00:41:31.709 },{ 00:41:31.709 "params": { 00:41:31.709 "name": "Nvme1", 00:41:31.709 "trtype": "tcp", 00:41:31.709 "traddr": "10.0.0.2", 00:41:31.709 "adrfam": "ipv4", 00:41:31.709 "trsvcid": "4420", 00:41:31.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:31.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:31.709 "hdgst": false, 00:41:31.709 "ddgst": false 00:41:31.709 }, 00:41:31.709 "method": "bdev_nvme_attach_controller" 00:41:31.709 }' 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:31.709 00:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:31.709 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:31.709 ... 00:41:31.709 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:31.709 ... 
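The LD_PRELOAD plus --ioengine=spdk_bdev --spdk_json_conf pair above is how the harness drives a stock fio binary through the SPDK bdev fio plugin; the two per-controller config fragments built earlier are merged by jq and handed to fio over /dev/fd/62. A rough standalone equivalent writes the same config to a regular file instead of a file descriptor. The subsystems/bdev wrapper below follows the usual SPDK app-config layout and is an assumption on my part (the harness builds its own via gen_nvmf_target_json); the attach parameters are taken from the printf output above, paths are illustrative, and job.fio stands in for whatever job file is passed:

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false } }
      ]
    }
  ]
}
EOF

# The leading space in LD_PRELOAD=' .../spdk_bdev' in the trace is just an empty asan_lib slot;
# when the plugin is built with ASAN, the library found by "ldd ... | grep libasan" is prepended.
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio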
00:41:31.709 fio-3.35 00:41:31.709 Starting 4 threads 00:41:36.980 00:41:36.980 filename0: (groupid=0, jobs=1): err= 0: pid=408195: Wed Nov 20 00:06:11 2024 00:41:36.980 read: IOPS=1931, BW=15.1MiB/s (15.8MB/s)(75.5MiB/5003msec) 00:41:36.980 slat (nsec): min=4138, max=47572, avg=14442.34, stdev=5332.33 00:41:36.980 clat (usec): min=779, max=7495, avg=4089.55, stdev=391.10 00:41:36.980 lat (usec): min=793, max=7510, avg=4104.00, stdev=391.08 00:41:36.980 clat percentiles (usec): 00:41:36.980 | 1.00th=[ 3228], 5.00th=[ 3687], 10.00th=[ 3851], 20.00th=[ 3982], 00:41:36.980 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:41:36.980 | 70.00th=[ 4146], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4424], 00:41:36.980 | 99.00th=[ 6063], 99.50th=[ 6783], 99.90th=[ 7177], 99.95th=[ 7308], 00:41:36.980 | 99.99th=[ 7504] 00:41:36.980 bw ( KiB/s): min=15120, max=15744, per=24.99%, avg=15460.80, stdev=198.13, samples=10 00:41:36.980 iops : min= 1890, max= 1968, avg=1932.60, stdev=24.77, samples=10 00:41:36.980 lat (usec) : 1000=0.09% 00:41:36.980 lat (msec) : 2=0.08%, 4=24.32%, 10=75.51% 00:41:36.980 cpu : usr=90.62%, sys=7.26%, ctx=223, majf=0, minf=0 00:41:36.980 IO depths : 1=0.4%, 2=15.2%, 4=58.1%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.980 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.980 issued rwts: total=9664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.980 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:36.980 filename0: (groupid=0, jobs=1): err= 0: pid=408196: Wed Nov 20 00:06:11 2024 00:41:36.980 read: IOPS=1932, BW=15.1MiB/s (15.8MB/s)(75.5MiB/5001msec) 00:41:36.980 slat (nsec): min=4693, max=53420, avg=15206.75, stdev=5279.89 00:41:36.980 clat (usec): min=739, max=7737, avg=4075.89, stdev=511.80 00:41:36.980 lat (usec): min=753, max=7745, avg=4091.09, stdev=511.90 00:41:36.980 clat percentiles (usec): 00:41:36.980 | 1.00th=[ 1991], 5.00th=[ 3621], 10.00th=[ 3884], 20.00th=[ 3982], 00:41:36.980 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4080], 00:41:36.980 | 70.00th=[ 4113], 80.00th=[ 4146], 90.00th=[ 4228], 95.00th=[ 4424], 00:41:36.980 | 99.00th=[ 6587], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[ 7570], 00:41:36.980 | 99.99th=[ 7767] 00:41:36.980 bw ( KiB/s): min=15248, max=15856, per=25.05%, avg=15496.89, stdev=187.64, samples=9 00:41:36.980 iops : min= 1906, max= 1982, avg=1937.11, stdev=23.45, samples=9 00:41:36.980 lat (usec) : 750=0.01%, 1000=0.26% 00:41:36.980 lat (msec) : 2=0.73%, 4=25.70%, 10=73.30% 00:41:36.980 cpu : usr=87.42%, sys=8.18%, ctx=293, majf=0, minf=9 00:41:36.980 IO depths : 1=0.7%, 2=21.8%, 4=52.1%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.980 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.980 issued rwts: total=9666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.980 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:36.980 filename1: (groupid=0, jobs=1): err= 0: pid=408197: Wed Nov 20 00:06:11 2024 00:41:36.980 read: IOPS=1939, BW=15.2MiB/s (15.9MB/s)(75.8MiB/5001msec) 00:41:36.980 slat (usec): min=4, max=221, avg=14.66, stdev= 4.24 00:41:36.980 clat (usec): min=794, max=7353, avg=4066.76, stdev=451.86 00:41:36.980 lat (usec): min=807, max=7371, avg=4081.42, stdev=452.01 00:41:36.980 clat percentiles (usec): 00:41:36.980 | 1.00th=[ 2311], 5.00th=[ 
3589], 10.00th=[ 3851], 20.00th=[ 3982], 00:41:36.980 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:41:36.980 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4424], 00:41:36.981 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 7111], 99.95th=[ 7177], 00:41:36.981 | 99.99th=[ 7373] 00:41:36.981 bw ( KiB/s): min=15040, max=15856, per=25.07%, avg=15510.30, stdev=251.65, samples=10 00:41:36.981 iops : min= 1880, max= 1982, avg=1938.70, stdev=31.35, samples=10 00:41:36.981 lat (usec) : 1000=0.08% 00:41:36.981 lat (msec) : 2=0.74%, 4=25.26%, 10=73.92% 00:41:36.981 cpu : usr=94.58%, sys=4.92%, ctx=7, majf=0, minf=0 00:41:36.981 IO depths : 1=0.2%, 2=22.8%, 4=51.4%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.981 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.981 issued rwts: total=9700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.981 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:36.981 filename1: (groupid=0, jobs=1): err= 0: pid=408198: Wed Nov 20 00:06:11 2024 00:41:36.981 read: IOPS=1930, BW=15.1MiB/s (15.8MB/s)(75.5MiB/5002msec) 00:41:36.981 slat (nsec): min=4110, max=45423, avg=14584.92, stdev=4733.31 00:41:36.981 clat (usec): min=762, max=7747, avg=4086.81, stdev=423.06 00:41:36.981 lat (usec): min=775, max=7755, avg=4101.40, stdev=422.96 00:41:36.981 clat percentiles (usec): 00:41:36.981 | 1.00th=[ 3064], 5.00th=[ 3654], 10.00th=[ 3884], 20.00th=[ 3982], 00:41:36.981 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:41:36.981 | 70.00th=[ 4146], 80.00th=[ 4178], 90.00th=[ 4228], 95.00th=[ 4424], 00:41:36.981 | 99.00th=[ 5932], 99.50th=[ 6652], 99.90th=[ 7177], 99.95th=[ 7439], 00:41:36.981 | 99.99th=[ 7767] 00:41:36.981 bw ( KiB/s): min=14960, max=15744, per=24.96%, avg=15443.10, stdev=228.94, samples=10 00:41:36.981 iops : min= 1870, max= 1968, avg=1930.30, stdev=28.57, samples=10 00:41:36.981 lat (usec) : 1000=0.05% 00:41:36.981 lat (msec) : 2=0.43%, 4=24.40%, 10=75.11% 00:41:36.981 cpu : usr=91.82%, sys=6.64%, ctx=273, majf=0, minf=0 00:41:36.981 IO depths : 1=0.4%, 2=22.0%, 4=52.1%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.981 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.981 issued rwts: total=9658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.981 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:36.981 00:41:36.981 Run status group 0 (all jobs): 00:41:36.981 READ: bw=60.4MiB/s (63.3MB/s), 15.1MiB/s-15.2MiB/s (15.8MB/s-15.9MB/s), io=302MiB (317MB), run=5001-5003msec 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
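The destroy_subsystems calls the trace enters next are the mirror image of the setup: delete each NVMe-oF subsystem, then remove its backing null bdev. A minimal equivalent via scripts/rpc.py (path assumed; NQNs and bdev names from the trace):

for i in 0 1; do
    ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    ./scripts/rpc.py bdev_null_delete "bdev_null$i"
done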
00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.981 00:41:36.981 real 0m24.327s 00:41:36.981 user 4m34.361s 00:41:36.981 sys 0m6.117s 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:36.981 00:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.981 ************************************ 00:41:36.981 END TEST fio_dif_rand_params 00:41:36.981 ************************************ 00:41:37.242 00:06:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:37.242 00:06:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:37.242 00:06:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:37.242 00:06:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:37.242 ************************************ 00:41:37.242 START TEST fio_dif_digest 00:41:37.242 ************************************ 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:37.242 00:06:11 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:37.242 bdev_null0 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:37.242 [2024-11-20 00:06:11.350932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:37.242 { 00:41:37.242 "params": { 00:41:37.242 "name": "Nvme$subsystem", 00:41:37.242 "trtype": "$TEST_TRANSPORT", 00:41:37.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:37.242 "adrfam": "ipv4", 00:41:37.242 "trsvcid": "$NVMF_PORT", 00:41:37.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:37.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:37.242 "hdgst": ${hdgst:-false}, 00:41:37.242 "ddgst": ${ddgst:-false} 00:41:37.242 }, 00:41:37.242 "method": "bdev_nvme_attach_controller" 00:41:37.242 } 00:41:37.242 EOF 00:41:37.242 )") 00:41:37.242 
00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
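For reference, the subsystem setup traced above boils down to a short RPC sequence. A minimal sketch using scripts/rpc.py against the default /var/tmp/spdk.sock socket (the TCP transport itself is assumed to have been created earlier in the suite):
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # 64 MiB null bdev, 512 B blocks, 16 B metadata, DIF type 3
./scripts/rpc.py nvmf_create_transport -t tcp                                   # once per target (assumed done earlier in this suite)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
Header and data digests are then requested on the initiator side through the hdgst/ddgst parameters of bdev_nvme_attach_controller in the JSON config printed below.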
00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:37.242 "params": { 00:41:37.242 "name": "Nvme0", 00:41:37.242 "trtype": "tcp", 00:41:37.242 "traddr": "10.0.0.2", 00:41:37.242 "adrfam": "ipv4", 00:41:37.242 "trsvcid": "4420", 00:41:37.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:37.242 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:37.242 "hdgst": true, 00:41:37.242 "ddgst": true 00:41:37.242 }, 00:41:37.242 "method": "bdev_nvme_attach_controller" 00:41:37.242 }' 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:37.242 00:06:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:37.503 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:37.503 ... 
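The job file gen_fio_conf feeds to fio over /dev/fd/61 is not echoed in the trace; a rough sketch consistent with the parameters fio reports below (randread, 128k blocks, iodepth 3, 3 jobs, 10 s) is given here, where the filename Nvme0n1 is an assumption based on the Nvme0 controller name in the JSON above and the /tmp paths stand in for the /dev/fd descriptors:
cat <<'JOB' > /tmp/dif_digest.fio
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
JOB
# Invocation shape used by fio_bdev above: preload the SPDK bdev plugin and pass the attach-controller JSON.
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --spdk_json_conf=/tmp/bdev_nvme.json /tmp/dif_digest.fio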
00:41:37.503 fio-3.35 00:41:37.503 Starting 3 threads 00:41:49.785 00:41:49.785 filename0: (groupid=0, jobs=1): err= 0: pid=409460: Wed Nov 20 00:06:22 2024 00:41:49.785 read: IOPS=202, BW=25.4MiB/s (26.6MB/s)(255MiB/10048msec) 00:41:49.785 slat (usec): min=7, max=108, avg=15.60, stdev= 5.01 00:41:49.785 clat (usec): min=8901, max=54104, avg=14743.70, stdev=1581.85 00:41:49.785 lat (usec): min=8920, max=54119, avg=14759.30, stdev=1581.71 00:41:49.785 clat percentiles (usec): 00:41:49.785 | 1.00th=[12125], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:41:49.785 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:41:49.785 | 70.00th=[15270], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:41:49.785 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19530], 99.95th=[49546], 00:41:49.785 | 99.99th=[54264] 00:41:49.785 bw ( KiB/s): min=24576, max=27136, per=33.08%, avg=26063.35, stdev=606.15, samples=20 00:41:49.785 iops : min= 192, max= 212, avg=203.60, stdev= 4.75, samples=20 00:41:49.785 lat (msec) : 10=0.34%, 20=99.56%, 50=0.05%, 100=0.05% 00:41:49.785 cpu : usr=93.39%, sys=6.11%, ctx=25, majf=0, minf=242 00:41:49.785 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.785 issued rwts: total=2039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.785 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:49.785 filename0: (groupid=0, jobs=1): err= 0: pid=409461: Wed Nov 20 00:06:22 2024 00:41:49.785 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(265MiB/10049msec) 00:41:49.785 slat (nsec): min=7309, max=61079, avg=18732.97, stdev=5743.07 00:41:49.785 clat (usec): min=9220, max=52809, avg=14197.23, stdev=1577.53 00:41:49.785 lat (usec): min=9244, max=52824, avg=14215.96, stdev=1577.11 00:41:49.785 clat percentiles (usec): 00:41:49.785 | 1.00th=[11731], 5.00th=[12387], 10.00th=[12911], 20.00th=[13304], 00:41:49.785 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:41:49.785 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:41:49.785 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19268], 99.95th=[49021], 00:41:49.785 | 99.99th=[52691] 00:41:49.785 bw ( KiB/s): min=25088, max=28416, per=34.35%, avg=27059.20, stdev=826.83, samples=20 00:41:49.785 iops : min= 196, max= 222, avg=211.40, stdev= 6.46, samples=20 00:41:49.785 lat (msec) : 10=0.33%, 20=99.57%, 50=0.05%, 100=0.05% 00:41:49.785 cpu : usr=93.33%, sys=6.12%, ctx=19, majf=0, minf=77 00:41:49.785 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.785 issued rwts: total=2117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.785 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:49.785 filename0: (groupid=0, jobs=1): err= 0: pid=409462: Wed Nov 20 00:06:22 2024 00:41:49.785 read: IOPS=201, BW=25.2MiB/s (26.5MB/s)(254MiB/10046msec) 00:41:49.785 slat (nsec): min=5213, max=44898, avg=15390.89, stdev=4475.90 00:41:49.785 clat (usec): min=10624, max=56259, avg=14814.24, stdev=2144.49 00:41:49.785 lat (usec): min=10658, max=56279, avg=14829.63, stdev=2144.42 00:41:49.785 clat percentiles (usec): 00:41:49.785 | 1.00th=[12518], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 
00:41:49.785 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:41:49.785 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[16450], 00:41:49.785 | 99.00th=[17433], 99.50th=[18220], 99.90th=[54264], 99.95th=[55313], 00:41:49.785 | 99.99th=[56361] 00:41:49.785 bw ( KiB/s): min=24320, max=27392, per=32.93%, avg=25945.60, stdev=655.44, samples=20 00:41:49.785 iops : min= 190, max= 214, avg=202.70, stdev= 5.12, samples=20 00:41:49.785 lat (msec) : 20=99.70%, 50=0.10%, 100=0.20% 00:41:49.785 cpu : usr=93.32%, sys=6.18%, ctx=28, majf=0, minf=174 00:41:49.785 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.785 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.785 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:49.785 00:41:49.785 Run status group 0 (all jobs): 00:41:49.785 READ: bw=76.9MiB/s (80.7MB/s), 25.2MiB/s-26.3MiB/s (26.5MB/s-27.6MB/s), io=773MiB (811MB), run=10046-10049msec 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.785 00:41:49.785 real 0m11.165s 00:41:49.785 user 0m29.210s 00:41:49.785 sys 0m2.137s 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:49.785 00:06:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:49.785 ************************************ 00:41:49.785 END TEST fio_dif_digest 00:41:49.785 ************************************ 00:41:49.785 00:06:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:49.785 00:06:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:49.785 00:06:22 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:49.785 00:06:22 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:49.785 00:06:22 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:49.785 00:06:22 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:49.785 00:06:22 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:49.785 00:06:22 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:49.785 rmmod nvme_tcp 00:41:49.785 rmmod nvme_fabrics 00:41:49.785 rmmod nvme_keyring 00:41:49.785 00:06:22 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:49.785 00:06:22 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:49.785 00:06:22 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:49.785 00:06:22 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 402799 ']' 00:41:49.785 00:06:22 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 402799 00:41:49.785 00:06:22 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 402799 ']' 00:41:49.785 00:06:22 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 402799 00:41:49.785 00:06:22 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:41:49.785 00:06:22 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:49.785 00:06:22 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 402799 00:41:49.785 00:06:22 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:49.785 00:06:22 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:49.785 00:06:22 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 402799' 00:41:49.785 killing process with pid 402799 00:41:49.785 00:06:22 nvmf_dif -- common/autotest_common.sh@973 -- # kill 402799 00:41:49.785 00:06:22 nvmf_dif -- common/autotest_common.sh@978 -- # wait 402799 00:41:49.786 00:06:22 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:49.786 00:06:22 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:49.786 Waiting for block devices as requested 00:41:49.786 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:49.786 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:50.047 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:50.047 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:50.047 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:50.306 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:50.306 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:50.306 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:50.306 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:50.307 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:50.566 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:50.566 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:50.566 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:50.566 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:50.824 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:50.824 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:50.824 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:51.083 00:06:25 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:51.083 00:06:25 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:51.083 00:06:25 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:51.083 00:06:25 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:41:51.083 00:06:25 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:51.083 00:06:25 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:41:51.083 00:06:25 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:51.083 00:06:25 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:51.083 00:06:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:51.083 00:06:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:51.083 00:06:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:52.992 00:06:27 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:52.992 
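Condensed, the nvmftestfini teardown traced above amounts to the following sketch (the PID and interface names are specific to this run, and the namespace name is an assumption taken from the nvmf_abort_qd_sizes setup later in this log):
modprobe -v -r nvme-tcp nvme-fabrics                    # unload the kernel NVMe-oF initiator modules
kill 402799                                             # killprocess: stop the nvmf_tgt app (the helper also waits for it to exit)
iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop only the SPDK-tagged firewall rules
ip netns del cvl_0_0_ns_spdk 2>/dev/null                # remove_spdk_ns (namespace name assumed)
ip -4 addr flush cvl_0_1                                # clear the initiator-side address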
00:41:52.992 real 1m6.913s 00:41:52.992 user 6m31.767s 00:41:52.992 sys 0m17.168s 00:41:52.992 00:06:27 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:52.992 00:06:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:52.992 ************************************ 00:41:52.992 END TEST nvmf_dif 00:41:52.992 ************************************ 00:41:52.992 00:06:27 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:52.992 00:06:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:52.992 00:06:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:52.992 00:06:27 -- common/autotest_common.sh@10 -- # set +x 00:41:52.992 ************************************ 00:41:52.992 START TEST nvmf_abort_qd_sizes 00:41:52.992 ************************************ 00:41:52.992 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:53.252 * Looking for test storage... 00:41:53.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:53.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.252 --rc genhtml_branch_coverage=1 00:41:53.252 --rc genhtml_function_coverage=1 00:41:53.252 --rc genhtml_legend=1 00:41:53.252 --rc geninfo_all_blocks=1 00:41:53.252 --rc geninfo_unexecuted_blocks=1 00:41:53.252 00:41:53.252 ' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:53.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.252 --rc genhtml_branch_coverage=1 00:41:53.252 --rc genhtml_function_coverage=1 00:41:53.252 --rc genhtml_legend=1 00:41:53.252 --rc geninfo_all_blocks=1 00:41:53.252 --rc geninfo_unexecuted_blocks=1 00:41:53.252 00:41:53.252 ' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:53.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.252 --rc genhtml_branch_coverage=1 00:41:53.252 --rc genhtml_function_coverage=1 00:41:53.252 --rc genhtml_legend=1 00:41:53.252 --rc geninfo_all_blocks=1 00:41:53.252 --rc geninfo_unexecuted_blocks=1 00:41:53.252 00:41:53.252 ' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:53.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:53.252 --rc genhtml_branch_coverage=1 00:41:53.252 --rc genhtml_function_coverage=1 00:41:53.252 --rc genhtml_legend=1 00:41:53.252 --rc geninfo_all_blocks=1 00:41:53.252 --rc geninfo_unexecuted_blocks=1 00:41:53.252 00:41:53.252 ' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:53.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:53.252 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:53.253 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:53.253 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:53.253 00:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:53.253 00:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:55.155 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:55.155 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:55.155 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:55.155 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:55.155 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:55.155 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:55.155 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:55.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:55.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:55.156 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:55.156 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:55.156 00:06:29 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:55.156 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:55.413 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:55.413 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:55.414 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:55.414 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:55.414 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:55.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:55.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:41:55.414 00:41:55.414 --- 10.0.0.2 ping statistics --- 00:41:55.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:55.414 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:41:55.414 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:55.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
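Condensing the nvmf_tcp_init steps traced above into one sketch (interface names are the e810 ports detected on this host; the iptables comment is shortened):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps the second port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                             # initiator -> target, as shown above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator, replies shown below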
00:41:55.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:41:55.414 00:41:55.414 --- 10.0.0.1 ping statistics --- 00:41:55.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:55.414 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:41:55.414 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:55.414 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:41:55.414 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:55.414 00:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:56.348 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:56.348 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:56.348 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:56.348 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:56.348 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:56.348 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:56.348 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:56.608 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:56.608 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:56.608 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:56.608 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:56.608 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:56.608 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:56.608 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:56.608 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:56.608 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:57.545 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=414368 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 414368 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 414368 ']' 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:57.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:57.545 00:06:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:57.545 [2024-11-20 00:06:31.816545] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:41:57.545 [2024-11-20 00:06:31.816638] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:57.804 [2024-11-20 00:06:31.899509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:57.804 [2024-11-20 00:06:31.950163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:57.804 [2024-11-20 00:06:31.950230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:57.804 [2024-11-20 00:06:31.950252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:57.804 [2024-11-20 00:06:31.950266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:57.804 [2024-11-20 00:06:31.950277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:57.804 [2024-11-20 00:06:31.951959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:57.804 [2024-11-20 00:06:31.952030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:57.804 [2024-11-20 00:06:31.952126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:57.804 [2024-11-20 00:06:31.952129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:57.804 
00:06:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:57.804 00:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:57.804 ************************************ 00:41:57.804 START TEST spdk_target_abort 00:41:57.804 ************************************ 00:41:57.804 00:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:41:57.804 00:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:57.804 00:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:41:57.804 00:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.804 00:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:01.088 spdk_targetn1 00:42:01.088 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.088 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:01.088 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.088 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:01.088 [2024-11-20 00:06:34.950019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:01.089 [2024-11-20 00:06:34.994659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:01.089 00:06:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:04.382 Initializing NVMe Controllers 00:42:04.382 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:04.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:04.382 Initialization complete. Launching workers. 00:42:04.382 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13455, failed: 0 00:42:04.382 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1239, failed to submit 12216 00:42:04.382 success 765, unsuccessful 474, failed 0 00:42:04.382 00:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:04.382 00:06:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:07.671 Initializing NVMe Controllers 00:42:07.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:07.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:07.671 Initialization complete. Launching workers. 00:42:07.671 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8598, failed: 0 00:42:07.671 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1237, failed to submit 7361 00:42:07.671 success 327, unsuccessful 910, failed 0 00:42:07.671 00:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:07.671 00:06:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:10.973 Initializing NVMe Controllers 00:42:10.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:10.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:10.973 Initialization complete. Launching workers. 
00:42:10.973 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31248, failed: 0 00:42:10.973 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2694, failed to submit 28554 00:42:10.973 success 514, unsuccessful 2180, failed 0 00:42:10.973 00:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:10.973 00:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:10.973 00:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:10.973 00:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.973 00:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:10.973 00:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:10.973 00:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:11.909 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:11.909 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 414368 00:42:11.909 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 414368 ']' 00:42:11.909 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 414368 00:42:11.909 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:42:11.909 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:11.909 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 414368 00:42:11.909 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:11.909 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:11.909 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 414368' 00:42:11.909 killing process with pid 414368 00:42:11.909 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 414368 00:42:11.909 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 414368 00:42:12.168 00:42:12.168 real 0m14.251s 00:42:12.168 user 0m54.031s 00:42:12.168 sys 0m2.501s 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:12.168 ************************************ 00:42:12.168 END TEST spdk_target_abort 00:42:12.168 ************************************ 00:42:12.168 00:06:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:12.168 00:06:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:12.168 00:06:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:12.168 00:06:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:12.168 ************************************ 00:42:12.168 START TEST kernel_target_abort 00:42:12.168 
************************************ 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:12.168 00:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:13.546 Waiting for block devices as requested 00:42:13.546 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:13.546 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:13.546 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:13.546 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:13.804 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:13.804 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:13.804 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:13.804 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:13.804 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:14.063 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:14.063 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:14.063 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:14.063 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:14.321 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:14.321 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:14.321 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:14.321 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:14.579 No valid GPT data, bailing 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:14.579 00:06:48 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:14.579 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:14.580 00:42:14.580 Discovery Log Number of Records 2, Generation counter 2 00:42:14.580 =====Discovery Log Entry 0====== 00:42:14.580 trtype: tcp 00:42:14.580 adrfam: ipv4 00:42:14.580 subtype: current discovery subsystem 00:42:14.580 treq: not specified, sq flow control disable supported 00:42:14.580 portid: 1 00:42:14.580 trsvcid: 4420 00:42:14.580 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:14.580 traddr: 10.0.0.1 00:42:14.580 eflags: none 00:42:14.580 sectype: none 00:42:14.580 =====Discovery Log Entry 1====== 00:42:14.580 trtype: tcp 00:42:14.580 adrfam: ipv4 00:42:14.580 subtype: nvme subsystem 00:42:14.580 treq: not specified, sq flow control disable supported 00:42:14.580 portid: 1 00:42:14.580 trsvcid: 4420 00:42:14.580 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:14.580 traddr: 10.0.0.1 00:42:14.580 eflags: none 00:42:14.580 sectype: none 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:14.580 00:06:48 
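The block above is configure_kernel_target: the Linux nvmet target is assembled through configfs, backed by the local /dev/nvme0n1 that passed the GPT check, exported on 10.0.0.1:4420, and then verified with nvme discover. The xtrace output hides the redirection targets of the echo commands, so the attribute file names in the sketch below are the standard nvmet configfs ones and are a reconstruction, not a quote from nvmf/common.sh:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet                  # the trace loads nvmet here; nvmet-tcp is present by the time teardown removes it
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo "SPDK-$nqn"  > "$sub/attr_model"                # model string; the exact attribute is not visible in the xtrace
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"  # backing block device found above
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/$nqn"
  nvme discover -t tcp -a 10.0.0.1 -s 4420             # should list the discovery subsystem plus testnqn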
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:14.580 00:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:17.872 Initializing NVMe Controllers 00:42:17.872 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:17.872 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:17.872 Initialization complete. Launching workers. 00:42:17.872 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 45804, failed: 0 00:42:17.872 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 45804, failed to submit 0 00:42:17.872 success 0, unsuccessful 45804, failed 0 00:42:17.872 00:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:17.872 00:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:21.171 Initializing NVMe Controllers 00:42:21.171 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:21.171 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:21.171 Initialization complete. Launching workers. 
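Before the kernel-side runs, rabort rebuilds the -r transport ID one field at a time from trtype/adrfam/traddr/trsvcid/subnqn, which is what the incremental target= assignments above show; the first sweep iteration (qd=4) has just completed and the qd=24 output continues below. A sketch of that string assembly (the indirect expansion is my reconstruction of what abort_qd_sizes.sh does, not a copy of it):

  trtype=tcp adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:testnqn
  target=""
  for r in trtype adrfam traddr trsvcid subnqn; do
      target+="${target:+ }$r:${!r}"
  done
  echo "$target"   # trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn

The qd=4 counters above (success 0, unsuccessful 45804) are consistent with the kernel target reporting every abort as "command not aborted" because the I/O had already completed, whereas the SPDK target runs earlier in the log show a mix of successful and unsuccessful aborts.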
00:42:21.171 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89330, failed: 0 00:42:21.171 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22522, failed to submit 66808 00:42:21.171 success 0, unsuccessful 22522, failed 0 00:42:21.171 00:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:21.171 00:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:24.463 Initializing NVMe Controllers 00:42:24.463 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:24.463 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:24.463 Initialization complete. Launching workers. 00:42:24.464 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81859, failed: 0 00:42:24.464 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20442, failed to submit 61417 00:42:24.464 success 0, unsuccessful 20442, failed 0 00:42:24.464 00:06:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:24.464 00:06:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:24.464 00:06:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:24.464 00:06:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:24.464 00:06:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:24.464 00:06:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:24.464 00:06:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:24.464 00:06:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:24.464 00:06:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:24.464 00:06:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:25.031 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:25.290 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:25.290 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:25.290 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:25.290 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:25.290 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:25.290 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:25.290 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:25.290 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:25.290 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:25.290 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:25.290 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:25.290 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:25.290 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:25.290 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:42:25.290 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:26.225 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:26.225 00:42:26.225 real 0m14.092s 00:42:26.225 user 0m6.351s 00:42:26.225 sys 0m3.222s 00:42:26.225 00:07:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:26.225 00:07:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:26.225 ************************************ 00:42:26.225 END TEST kernel_target_abort 00:42:26.225 ************************************ 00:42:26.225 00:07:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:26.225 00:07:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:26.225 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:26.225 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:26.225 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:26.225 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:26.225 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:26.225 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:26.225 rmmod nvme_tcp 00:42:26.485 rmmod nvme_fabrics 00:42:26.485 rmmod nvme_keyring 00:42:26.485 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:26.485 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:26.485 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:26.485 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 414368 ']' 00:42:26.485 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 414368 00:42:26.485 00:07:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 414368 ']' 00:42:26.485 00:07:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 414368 00:42:26.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (414368) - No such process 00:42:26.485 00:07:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 414368 is not found' 00:42:26.485 Process with pid 414368 is not found 00:42:26.485 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:26.485 00:07:00 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:27.541 Waiting for block devices as requested 00:42:27.541 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:27.822 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:27.822 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:27.822 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:27.822 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:28.082 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:28.082 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:28.082 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:28.082 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:28.082 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:28.341 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:28.341 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:28.341 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:28.601 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:28.601 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:28.601 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:28.601 0000:80:04.0 (8086 
0e20): vfio-pci -> ioatdma 00:42:28.862 00:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:28.862 00:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:28.862 00:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:28.862 00:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:28.862 00:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:28.862 00:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:28.862 00:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:28.862 00:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:28.862 00:07:02 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:28.862 00:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:28.862 00:07:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:30.773 00:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:30.774 00:42:30.774 real 0m37.780s 00:42:30.774 user 1m2.497s 00:42:30.774 sys 0m9.221s 00:42:30.774 00:07:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:30.774 00:07:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:30.774 ************************************ 00:42:30.774 END TEST nvmf_abort_qd_sizes 00:42:30.774 ************************************ 00:42:30.774 00:07:05 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:30.774 00:07:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:30.774 00:07:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:30.774 00:07:05 -- common/autotest_common.sh@10 -- # set +x 00:42:30.774 ************************************ 00:42:30.774 START TEST keyring_file 00:42:30.774 ************************************ 00:42:30.774 00:07:05 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:31.033 * Looking for test storage... 
00:42:31.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:31.033 00:07:05 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:31.033 00:07:05 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:42:31.033 00:07:05 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:31.033 00:07:05 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:31.033 00:07:05 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:31.033 00:07:05 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:31.033 00:07:05 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:31.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.033 --rc genhtml_branch_coverage=1 00:42:31.033 --rc genhtml_function_coverage=1 00:42:31.033 --rc genhtml_legend=1 00:42:31.033 --rc geninfo_all_blocks=1 00:42:31.033 --rc geninfo_unexecuted_blocks=1 00:42:31.033 00:42:31.033 ' 00:42:31.033 00:07:05 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:31.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.033 --rc genhtml_branch_coverage=1 00:42:31.033 --rc genhtml_function_coverage=1 00:42:31.033 --rc genhtml_legend=1 00:42:31.033 --rc geninfo_all_blocks=1 
00:42:31.033 --rc geninfo_unexecuted_blocks=1 00:42:31.033 00:42:31.033 ' 00:42:31.033 00:07:05 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:31.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.033 --rc genhtml_branch_coverage=1 00:42:31.033 --rc genhtml_function_coverage=1 00:42:31.033 --rc genhtml_legend=1 00:42:31.033 --rc geninfo_all_blocks=1 00:42:31.033 --rc geninfo_unexecuted_blocks=1 00:42:31.033 00:42:31.033 ' 00:42:31.033 00:07:05 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:31.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.033 --rc genhtml_branch_coverage=1 00:42:31.033 --rc genhtml_function_coverage=1 00:42:31.033 --rc genhtml_legend=1 00:42:31.033 --rc geninfo_all_blocks=1 00:42:31.033 --rc geninfo_unexecuted_blocks=1 00:42:31.033 00:42:31.033 ' 00:42:31.033 00:07:05 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:31.033 00:07:05 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:31.033 00:07:05 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:31.033 00:07:05 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:31.033 00:07:05 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:31.033 00:07:05 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:31.033 00:07:05 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:31.033 00:07:05 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:31.033 00:07:05 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:31.033 00:07:05 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:31.034 00:07:05 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:31.034 00:07:05 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:31.034 00:07:05 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:31.034 00:07:05 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:31.034 00:07:05 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.034 00:07:05 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.034 00:07:05 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.034 00:07:05 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:31.034 00:07:05 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:31.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:31.034 00:07:05 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:31.034 00:07:05 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:31.034 00:07:05 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:31.034 00:07:05 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:31.034 00:07:05 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:31.034 00:07:05 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
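The prep_key calls the trace is entering here turn the raw hex PSKs (00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00) into NVMe/TCP TLS PSK interchange strings via a python heredoc, write them to mktemp files, and chmod them to 0600. The heredoc body is not shown, so the sketch below encodes the interchange format as I understand it (base64 of the key bytes followed by their CRC32, wrapped as NVMeTLSkey-1:<digest>:...:); both the format_psk name and the exact layout are assumptions rather than quotes from nvmf/common.sh:

  format_psk() {   # usage: format_psk <hex_key> <digest>; digest 0 means the configured PSK is used as-is
      local b64
      b64=$(python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$1")
      printf 'NVMeTLSkey-1:%02d:%s:\n' "$2" "$b64"
  }
  key0path=$(mktemp)
  format_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
  chmod 0600 "$key0path"   # keyring_file rejects key files readable by group/other, as a later step shows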
00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kMvsAGbxX3 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kMvsAGbxX3 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kMvsAGbxX3 00:42:31.034 00:07:05 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.kMvsAGbxX3 00:42:31.034 00:07:05 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yfqLl5Ud1Z 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:31.034 00:07:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yfqLl5Ud1Z 00:42:31.034 00:07:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yfqLl5Ud1Z 00:42:31.034 00:07:05 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yfqLl5Ud1Z 00:42:31.034 00:07:05 keyring_file -- keyring/file.sh@30 -- # tgtpid=420133 00:42:31.034 00:07:05 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:31.034 00:07:05 keyring_file -- keyring/file.sh@32 -- # waitforlisten 420133 00:42:31.034 00:07:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 420133 ']' 00:42:31.034 00:07:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:31.034 00:07:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:31.034 00:07:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:31.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:31.034 00:07:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:31.034 00:07:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:31.293 [2024-11-20 00:07:05.369630] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:42:31.293 [2024-11-20 00:07:05.369711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420133 ] 00:42:31.293 [2024-11-20 00:07:05.445884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:31.293 [2024-11-20 00:07:05.494812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:31.551 00:07:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:31.551 [2024-11-20 00:07:05.756652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:31.551 null0 00:42:31.551 [2024-11-20 00:07:05.788711] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:31.551 [2024-11-20 00:07:05.789278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.551 00:07:05 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:31.551 [2024-11-20 00:07:05.820774] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:31.551 request: 00:42:31.551 { 00:42:31.551 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:31.551 "secure_channel": false, 00:42:31.551 "listen_address": { 00:42:31.551 "trtype": "tcp", 00:42:31.551 "traddr": "127.0.0.1", 00:42:31.551 "trsvcid": "4420" 00:42:31.551 }, 00:42:31.551 "method": "nvmf_subsystem_add_listener", 00:42:31.551 "req_id": 1 00:42:31.551 } 00:42:31.551 Got JSON-RPC error response 00:42:31.551 response: 00:42:31.551 { 00:42:31.551 "code": 
-32602, 00:42:31.551 "message": "Invalid parameters" 00:42:31.551 } 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:31.551 00:07:05 keyring_file -- keyring/file.sh@47 -- # bperfpid=420141 00:42:31.551 00:07:05 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:31.551 00:07:05 keyring_file -- keyring/file.sh@49 -- # waitforlisten 420141 /var/tmp/bperf.sock 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 420141 ']' 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:31.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:31.551 00:07:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:31.810 [2024-11-20 00:07:05.872030] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:42:31.810 [2024-11-20 00:07:05.872135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420141 ] 00:42:31.810 [2024-11-20 00:07:05.942600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:31.810 [2024-11-20 00:07:05.991424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:31.810 00:07:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:31.810 00:07:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:31.810 00:07:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kMvsAGbxX3 00:42:31.810 00:07:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kMvsAGbxX3 00:42:32.380 00:07:06 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yfqLl5Ud1Z 00:42:32.380 00:07:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yfqLl5Ud1Z 00:42:32.380 00:07:06 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:32.380 00:07:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:32.380 00:07:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:32.380 00:07:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:32.380 00:07:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:32.639 
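With bdevperf listening on /var/tmp/bperf.sock, the test registers both PSK files as named keys in its keyring and reads them back to confirm the stored paths, which is what the keyring_file_add_key and keyring_get_keys calls above and below do. Compressed into a few commands (SPDK_DIR as before; key0path/key1path stand for the two 0600 temp files created earlier):

  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_file_add_key key0 "$key0path"
  $rpc keyring_file_add_key key1 "$key1path"
  # keyring_get_keys lists every registered key; jq picks out one entry, exactly as keyring/common.sh does
  $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .path'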
00:07:06 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.kMvsAGbxX3 == \/\t\m\p\/\t\m\p\.\k\M\v\s\A\G\b\x\X\3 ]] 00:42:32.639 00:07:06 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:32.639 00:07:06 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:32.639 00:07:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:32.639 00:07:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:32.639 00:07:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:33.205 00:07:07 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.yfqLl5Ud1Z == \/\t\m\p\/\t\m\p\.\y\f\q\L\l\5\U\d\1\Z ]] 00:42:33.205 00:07:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:33.205 00:07:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:33.205 00:07:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:33.205 00:07:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:33.205 00:07:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:33.205 00:07:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:33.205 00:07:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:33.205 00:07:07 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:33.205 00:07:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:33.205 00:07:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:33.205 00:07:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:33.205 00:07:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:33.205 00:07:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:33.474 00:07:07 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:33.474 00:07:07 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:33.474 00:07:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:33.737 [2024-11-20 00:07:08.033211] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:33.997 nvme0n1 00:42:33.997 00:07:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:33.997 00:07:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:33.997 00:07:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:33.997 00:07:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:33.997 00:07:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:33.997 00:07:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:34.255 00:07:08 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:34.255 00:07:08 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:34.255 00:07:08 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:42:34.255 00:07:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:34.255 00:07:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:34.255 00:07:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:34.255 00:07:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:34.513 00:07:08 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:34.513 00:07:08 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:34.513 Running I/O for 1 seconds... 00:42:35.907 8373.00 IOPS, 32.71 MiB/s 00:42:35.907 Latency(us) 00:42:35.907 [2024-11-19T23:07:10.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:35.907 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:35.907 nvme0n1 : 1.01 8423.44 32.90 0.00 0.00 15141.04 4466.16 21651.15 00:42:35.907 [2024-11-19T23:07:10.219Z] =================================================================================================================== 00:42:35.907 [2024-11-19T23:07:10.219Z] Total : 8423.44 32.90 0.00 0.00 15141.04 4466.16 21651.15 00:42:35.907 { 00:42:35.907 "results": [ 00:42:35.907 { 00:42:35.907 "job": "nvme0n1", 00:42:35.907 "core_mask": "0x2", 00:42:35.907 "workload": "randrw", 00:42:35.907 "percentage": 50, 00:42:35.907 "status": "finished", 00:42:35.907 "queue_depth": 128, 00:42:35.907 "io_size": 4096, 00:42:35.907 "runtime": 1.009326, 00:42:35.907 "iops": 8423.442970853817, 00:42:35.907 "mibps": 32.904074104897724, 00:42:35.907 "io_failed": 0, 00:42:35.907 "io_timeout": 0, 00:42:35.907 "avg_latency_us": 15141.040273922474, 00:42:35.907 "min_latency_us": 4466.157037037037, 00:42:35.907 "max_latency_us": 21651.152592592593 00:42:35.907 } 00:42:35.907 ], 00:42:35.907 "core_count": 1 00:42:35.907 } 00:42:35.907 00:07:09 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:35.907 00:07:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:35.907 00:07:10 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:35.907 00:07:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:35.907 00:07:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:35.907 00:07:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:35.907 00:07:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.907 00:07:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:36.166 00:07:10 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:36.166 00:07:10 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:36.166 00:07:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:36.166 00:07:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:36.166 00:07:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:36.166 00:07:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:36.166 00:07:10 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:36.425 00:07:10 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:36.425 00:07:10 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:36.425 00:07:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:36.425 00:07:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:36.425 00:07:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:36.425 00:07:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:36.425 00:07:10 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:36.425 00:07:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:36.425 00:07:10 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:36.425 00:07:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:36.684 [2024-11-20 00:07:10.959057] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:36.684 [2024-11-20 00:07:10.959662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ccb70 (107): Transport endpoint is not connected 00:42:36.684 [2024-11-20 00:07:10.960655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ccb70 (9): Bad file descriptor 00:42:36.684 [2024-11-20 00:07:10.961653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:36.684 [2024-11-20 00:07:10.961678] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:36.684 [2024-11-20 00:07:10.961693] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:36.684 [2024-11-20 00:07:10.961710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
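The sequence above is the positive and negative TLS attach pair: bdev_nvme_attach_controller with --psk key0 succeeds (nvme0n1 appears and key0's refcount rises to 2), bdevperf drives one second of 4k randrw through it, the controller is detached, and a second attach with --psk key1 fails to establish the connection, producing the errors above and the -5 (Input/output error) JSON-RPC response that follows. The same round trip, using the rpc shorthand from the previous sketch:

  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
  $rpc bdev_nvme_detach_controller nvme0
  # Re-attaching with the other key is expected to fail (the negative case traced above):
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 \
      || echo "attach with key1 failed as expected"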
00:42:36.684 request: 00:42:36.684 { 00:42:36.684 "name": "nvme0", 00:42:36.684 "trtype": "tcp", 00:42:36.684 "traddr": "127.0.0.1", 00:42:36.684 "adrfam": "ipv4", 00:42:36.684 "trsvcid": "4420", 00:42:36.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:36.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:36.684 "prchk_reftag": false, 00:42:36.684 "prchk_guard": false, 00:42:36.684 "hdgst": false, 00:42:36.684 "ddgst": false, 00:42:36.684 "psk": "key1", 00:42:36.684 "allow_unrecognized_csi": false, 00:42:36.684 "method": "bdev_nvme_attach_controller", 00:42:36.684 "req_id": 1 00:42:36.684 } 00:42:36.684 Got JSON-RPC error response 00:42:36.684 response: 00:42:36.684 { 00:42:36.684 "code": -5, 00:42:36.684 "message": "Input/output error" 00:42:36.684 } 00:42:36.684 00:07:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:36.684 00:07:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:36.684 00:07:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:36.684 00:07:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:36.684 00:07:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:36.684 00:07:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:36.684 00:07:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:36.684 00:07:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:36.684 00:07:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:36.684 00:07:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:37.254 00:07:11 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:37.254 00:07:11 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:37.254 00:07:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:37.254 00:07:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:37.254 00:07:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:37.254 00:07:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:37.254 00:07:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:37.254 00:07:11 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:37.254 00:07:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:37.254 00:07:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:37.512 00:07:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:37.512 00:07:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:38.086 00:07:12 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:38.086 00:07:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:38.086 00:07:12 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:38.086 00:07:12 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:38.086 00:07:12 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.kMvsAGbxX3 00:42:38.086 00:07:12 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.kMvsAGbxX3 00:42:38.086 00:07:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:38.086 00:07:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.kMvsAGbxX3 00:42:38.086 00:07:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:38.086 00:07:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:38.086 00:07:12 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:38.086 00:07:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:38.086 00:07:12 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kMvsAGbxX3 00:42:38.086 00:07:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kMvsAGbxX3 00:42:38.348 [2024-11-20 00:07:12.609559] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.kMvsAGbxX3': 0100660 00:42:38.348 [2024-11-20 00:07:12.609599] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:38.348 request: 00:42:38.348 { 00:42:38.348 "name": "key0", 00:42:38.348 "path": "/tmp/tmp.kMvsAGbxX3", 00:42:38.348 "method": "keyring_file_add_key", 00:42:38.348 "req_id": 1 00:42:38.348 } 00:42:38.348 Got JSON-RPC error response 00:42:38.348 response: 00:42:38.348 { 00:42:38.348 "code": -1, 00:42:38.348 "message": "Operation not permitted" 00:42:38.348 } 00:42:38.348 00:07:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:38.348 00:07:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:38.348 00:07:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:38.348 00:07:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:38.348 00:07:12 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.kMvsAGbxX3 00:42:38.348 00:07:12 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kMvsAGbxX3 00:42:38.348 00:07:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kMvsAGbxX3 00:42:38.606 00:07:12 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.kMvsAGbxX3 00:42:38.606 00:07:12 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:38.606 00:07:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:38.606 00:07:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:38.606 00:07:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:38.606 00:07:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:38.606 00:07:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:39.177 00:07:13 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:39.177 00:07:13 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:39.177 00:07:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:39.177 00:07:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:39.177 00:07:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:39.177 00:07:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:39.177 00:07:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:39.177 00:07:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:39.177 00:07:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:39.177 00:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:39.177 [2024-11-20 00:07:13.451867] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.kMvsAGbxX3': No such file or directory 00:42:39.177 [2024-11-20 00:07:13.451906] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:39.177 [2024-11-20 00:07:13.451932] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:39.177 [2024-11-20 00:07:13.451947] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:39.177 [2024-11-20 00:07:13.451961] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:39.177 [2024-11-20 00:07:13.451975] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:39.177 request: 00:42:39.177 { 00:42:39.177 "name": "nvme0", 00:42:39.177 "trtype": "tcp", 00:42:39.177 "traddr": "127.0.0.1", 00:42:39.177 "adrfam": "ipv4", 00:42:39.177 "trsvcid": "4420", 00:42:39.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:39.177 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:39.177 "prchk_reftag": false, 00:42:39.177 "prchk_guard": false, 00:42:39.177 "hdgst": false, 00:42:39.177 "ddgst": false, 00:42:39.177 "psk": "key0", 00:42:39.177 "allow_unrecognized_csi": false, 00:42:39.177 "method": "bdev_nvme_attach_controller", 00:42:39.177 "req_id": 1 00:42:39.177 } 00:42:39.177 Got JSON-RPC error response 00:42:39.177 response: 00:42:39.177 { 00:42:39.177 "code": -19, 00:42:39.177 "message": "No such device" 00:42:39.177 } 00:42:39.177 00:07:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:39.177 00:07:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:39.177 00:07:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:39.177 00:07:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:39.177 00:07:13 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:39.177 00:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:39.436 00:07:13 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:39.436 00:07:13 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:39.436 00:07:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:39.436 00:07:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:39.436 00:07:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:39.436 00:07:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:39.436 00:07:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.D9MhfaGGDu 00:42:39.436 00:07:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:39.436 00:07:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:39.436 00:07:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:39.436 00:07:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:39.436 00:07:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:39.436 00:07:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:39.436 00:07:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:39.694 00:07:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.D9MhfaGGDu 00:42:39.694 00:07:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.D9MhfaGGDu 00:42:39.694 00:07:13 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.D9MhfaGGDu 00:42:39.694 00:07:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.D9MhfaGGDu 00:42:39.694 00:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.D9MhfaGGDu 00:42:39.952 00:07:14 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:39.952 00:07:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:40.211 nvme0n1 00:42:40.211 00:07:14 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:40.211 00:07:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:40.211 00:07:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.211 00:07:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.211 00:07:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.211 00:07:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:40.477 00:07:14 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:40.477 00:07:14 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:40.477 00:07:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:40.741 00:07:14 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:40.741 00:07:14 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:40.741 00:07:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.741 00:07:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:40.741 00:07:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:40.999 00:07:15 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:40.999 00:07:15 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:40.999 00:07:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:40.999 00:07:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.999 00:07:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.999 00:07:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.999 00:07:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:41.256 00:07:15 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:41.256 00:07:15 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:41.256 00:07:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:41.514 00:07:15 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:41.514 00:07:15 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:41.514 00:07:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:41.772 00:07:16 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:41.772 00:07:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.D9MhfaGGDu 00:42:41.772 00:07:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.D9MhfaGGDu 00:42:42.031 00:07:16 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yfqLl5Ud1Z 00:42:42.031 00:07:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yfqLl5Ud1Z 00:42:42.290 00:07:16 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:42.290 00:07:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:42.856 nvme0n1 00:42:42.856 00:07:16 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:42.856 00:07:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:43.115 00:07:17 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:43.115 "subsystems": [ 00:42:43.115 { 00:42:43.115 "subsystem": "keyring", 00:42:43.115 "config": [ 00:42:43.115 { 00:42:43.115 "method": "keyring_file_add_key", 00:42:43.115 "params": { 00:42:43.115 "name": "key0", 00:42:43.115 "path": "/tmp/tmp.D9MhfaGGDu" 00:42:43.115 } 00:42:43.115 }, 00:42:43.115 { 00:42:43.115 "method": "keyring_file_add_key", 00:42:43.115 "params": { 00:42:43.115 "name": "key1", 00:42:43.115 "path": "/tmp/tmp.yfqLl5Ud1Z" 00:42:43.115 } 00:42:43.115 } 00:42:43.115 ] 
00:42:43.115 }, 00:42:43.115 { 00:42:43.115 "subsystem": "iobuf", 00:42:43.115 "config": [ 00:42:43.115 { 00:42:43.115 "method": "iobuf_set_options", 00:42:43.115 "params": { 00:42:43.115 "small_pool_count": 8192, 00:42:43.115 "large_pool_count": 1024, 00:42:43.115 "small_bufsize": 8192, 00:42:43.115 "large_bufsize": 135168, 00:42:43.115 "enable_numa": false 00:42:43.115 } 00:42:43.115 } 00:42:43.115 ] 00:42:43.115 }, 00:42:43.115 { 00:42:43.115 "subsystem": "sock", 00:42:43.115 "config": [ 00:42:43.115 { 00:42:43.115 "method": "sock_set_default_impl", 00:42:43.115 "params": { 00:42:43.115 "impl_name": "posix" 00:42:43.115 } 00:42:43.115 }, 00:42:43.115 { 00:42:43.115 "method": "sock_impl_set_options", 00:42:43.115 "params": { 00:42:43.115 "impl_name": "ssl", 00:42:43.115 "recv_buf_size": 4096, 00:42:43.115 "send_buf_size": 4096, 00:42:43.115 "enable_recv_pipe": true, 00:42:43.115 "enable_quickack": false, 00:42:43.115 "enable_placement_id": 0, 00:42:43.115 "enable_zerocopy_send_server": true, 00:42:43.115 "enable_zerocopy_send_client": false, 00:42:43.115 "zerocopy_threshold": 0, 00:42:43.115 "tls_version": 0, 00:42:43.115 "enable_ktls": false 00:42:43.115 } 00:42:43.115 }, 00:42:43.115 { 00:42:43.115 "method": "sock_impl_set_options", 00:42:43.115 "params": { 00:42:43.115 "impl_name": "posix", 00:42:43.115 "recv_buf_size": 2097152, 00:42:43.115 "send_buf_size": 2097152, 00:42:43.115 "enable_recv_pipe": true, 00:42:43.115 "enable_quickack": false, 00:42:43.115 "enable_placement_id": 0, 00:42:43.115 "enable_zerocopy_send_server": true, 00:42:43.115 "enable_zerocopy_send_client": false, 00:42:43.115 "zerocopy_threshold": 0, 00:42:43.115 "tls_version": 0, 00:42:43.115 "enable_ktls": false 00:42:43.115 } 00:42:43.115 } 00:42:43.115 ] 00:42:43.115 }, 00:42:43.115 { 00:42:43.115 "subsystem": "vmd", 00:42:43.115 "config": [] 00:42:43.115 }, 00:42:43.115 { 00:42:43.115 "subsystem": "accel", 00:42:43.115 "config": [ 00:42:43.115 { 00:42:43.115 "method": "accel_set_options", 00:42:43.115 "params": { 00:42:43.115 "small_cache_size": 128, 00:42:43.115 "large_cache_size": 16, 00:42:43.115 "task_count": 2048, 00:42:43.115 "sequence_count": 2048, 00:42:43.115 "buf_count": 2048 00:42:43.115 } 00:42:43.115 } 00:42:43.115 ] 00:42:43.115 }, 00:42:43.115 { 00:42:43.115 "subsystem": "bdev", 00:42:43.115 "config": [ 00:42:43.115 { 00:42:43.115 "method": "bdev_set_options", 00:42:43.115 "params": { 00:42:43.115 "bdev_io_pool_size": 65535, 00:42:43.115 "bdev_io_cache_size": 256, 00:42:43.115 "bdev_auto_examine": true, 00:42:43.115 "iobuf_small_cache_size": 128, 00:42:43.115 "iobuf_large_cache_size": 16 00:42:43.115 } 00:42:43.115 }, 00:42:43.115 { 00:42:43.115 "method": "bdev_raid_set_options", 00:42:43.116 "params": { 00:42:43.116 "process_window_size_kb": 1024, 00:42:43.116 "process_max_bandwidth_mb_sec": 0 00:42:43.116 } 00:42:43.116 }, 00:42:43.116 { 00:42:43.116 "method": "bdev_iscsi_set_options", 00:42:43.116 "params": { 00:42:43.116 "timeout_sec": 30 00:42:43.116 } 00:42:43.116 }, 00:42:43.116 { 00:42:43.116 "method": "bdev_nvme_set_options", 00:42:43.116 "params": { 00:42:43.116 "action_on_timeout": "none", 00:42:43.116 "timeout_us": 0, 00:42:43.116 "timeout_admin_us": 0, 00:42:43.116 "keep_alive_timeout_ms": 10000, 00:42:43.116 "arbitration_burst": 0, 00:42:43.116 "low_priority_weight": 0, 00:42:43.116 "medium_priority_weight": 0, 00:42:43.116 "high_priority_weight": 0, 00:42:43.116 "nvme_adminq_poll_period_us": 10000, 00:42:43.116 "nvme_ioq_poll_period_us": 0, 00:42:43.116 "io_queue_requests": 512, 
00:42:43.116 "delay_cmd_submit": true, 00:42:43.116 "transport_retry_count": 4, 00:42:43.116 "bdev_retry_count": 3, 00:42:43.116 "transport_ack_timeout": 0, 00:42:43.116 "ctrlr_loss_timeout_sec": 0, 00:42:43.116 "reconnect_delay_sec": 0, 00:42:43.116 "fast_io_fail_timeout_sec": 0, 00:42:43.116 "disable_auto_failback": false, 00:42:43.116 "generate_uuids": false, 00:42:43.116 "transport_tos": 0, 00:42:43.116 "nvme_error_stat": false, 00:42:43.116 "rdma_srq_size": 0, 00:42:43.116 "io_path_stat": false, 00:42:43.116 "allow_accel_sequence": false, 00:42:43.116 "rdma_max_cq_size": 0, 00:42:43.116 "rdma_cm_event_timeout_ms": 0, 00:42:43.116 "dhchap_digests": [ 00:42:43.116 "sha256", 00:42:43.116 "sha384", 00:42:43.116 "sha512" 00:42:43.116 ], 00:42:43.116 "dhchap_dhgroups": [ 00:42:43.116 "null", 00:42:43.116 "ffdhe2048", 00:42:43.116 "ffdhe3072", 00:42:43.116 "ffdhe4096", 00:42:43.116 "ffdhe6144", 00:42:43.116 "ffdhe8192" 00:42:43.116 ] 00:42:43.116 } 00:42:43.116 }, 00:42:43.116 { 00:42:43.116 "method": "bdev_nvme_attach_controller", 00:42:43.116 "params": { 00:42:43.116 "name": "nvme0", 00:42:43.116 "trtype": "TCP", 00:42:43.116 "adrfam": "IPv4", 00:42:43.116 "traddr": "127.0.0.1", 00:42:43.116 "trsvcid": "4420", 00:42:43.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:43.116 "prchk_reftag": false, 00:42:43.116 "prchk_guard": false, 00:42:43.116 "ctrlr_loss_timeout_sec": 0, 00:42:43.116 "reconnect_delay_sec": 0, 00:42:43.116 "fast_io_fail_timeout_sec": 0, 00:42:43.116 "psk": "key0", 00:42:43.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:43.116 "hdgst": false, 00:42:43.116 "ddgst": false, 00:42:43.116 "multipath": "multipath" 00:42:43.116 } 00:42:43.116 }, 00:42:43.116 { 00:42:43.116 "method": "bdev_nvme_set_hotplug", 00:42:43.116 "params": { 00:42:43.116 "period_us": 100000, 00:42:43.116 "enable": false 00:42:43.116 } 00:42:43.116 }, 00:42:43.116 { 00:42:43.116 "method": "bdev_wait_for_examine" 00:42:43.116 } 00:42:43.116 ] 00:42:43.116 }, 00:42:43.116 { 00:42:43.116 "subsystem": "nbd", 00:42:43.116 "config": [] 00:42:43.116 } 00:42:43.116 ] 00:42:43.116 }' 00:42:43.116 00:07:17 keyring_file -- keyring/file.sh@115 -- # killprocess 420141 00:42:43.116 00:07:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 420141 ']' 00:42:43.116 00:07:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 420141 00:42:43.116 00:07:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:43.116 00:07:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:43.116 00:07:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 420141 00:42:43.116 00:07:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:43.116 00:07:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:43.116 00:07:17 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 420141' 00:42:43.116 killing process with pid 420141 00:42:43.116 00:07:17 keyring_file -- common/autotest_common.sh@973 -- # kill 420141 00:42:43.116 Received shutdown signal, test time was about 1.000000 seconds 00:42:43.116 00:42:43.116 Latency(us) 00:42:43.116 [2024-11-19T23:07:17.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:43.116 [2024-11-19T23:07:17.428Z] =================================================================================================================== 00:42:43.116 [2024-11-19T23:07:17.428Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:43.116 
00:07:17 keyring_file -- common/autotest_common.sh@978 -- # wait 420141 00:42:43.375 00:07:17 keyring_file -- keyring/file.sh@118 -- # bperfpid=421607 00:42:43.375 00:07:17 keyring_file -- keyring/file.sh@120 -- # waitforlisten 421607 /var/tmp/bperf.sock 00:42:43.375 00:07:17 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 421607 ']' 00:42:43.375 00:07:17 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:43.375 00:07:17 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:43.375 00:07:17 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:43.375 00:07:17 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:43.375 "subsystems": [ 00:42:43.375 { 00:42:43.375 "subsystem": "keyring", 00:42:43.375 "config": [ 00:42:43.375 { 00:42:43.375 "method": "keyring_file_add_key", 00:42:43.375 "params": { 00:42:43.375 "name": "key0", 00:42:43.375 "path": "/tmp/tmp.D9MhfaGGDu" 00:42:43.375 } 00:42:43.376 }, 00:42:43.376 { 00:42:43.376 "method": "keyring_file_add_key", 00:42:43.376 "params": { 00:42:43.376 "name": "key1", 00:42:43.376 "path": "/tmp/tmp.yfqLl5Ud1Z" 00:42:43.376 } 00:42:43.376 } 00:42:43.376 ] 00:42:43.376 }, 00:42:43.376 { 00:42:43.376 "subsystem": "iobuf", 00:42:43.376 "config": [ 00:42:43.376 { 00:42:43.376 "method": "iobuf_set_options", 00:42:43.376 "params": { 00:42:43.376 "small_pool_count": 8192, 00:42:43.376 "large_pool_count": 1024, 00:42:43.376 "small_bufsize": 8192, 00:42:43.376 "large_bufsize": 135168, 00:42:43.376 "enable_numa": false 00:42:43.376 } 00:42:43.376 } 00:42:43.376 ] 00:42:43.376 }, 00:42:43.376 { 00:42:43.376 "subsystem": "sock", 00:42:43.376 "config": [ 00:42:43.376 { 00:42:43.376 "method": "sock_set_default_impl", 00:42:43.376 "params": { 00:42:43.376 "impl_name": "posix" 00:42:43.376 } 00:42:43.376 }, 00:42:43.376 { 00:42:43.376 "method": "sock_impl_set_options", 00:42:43.376 "params": { 00:42:43.376 "impl_name": "ssl", 00:42:43.376 "recv_buf_size": 4096, 00:42:43.376 "send_buf_size": 4096, 00:42:43.376 "enable_recv_pipe": true, 00:42:43.376 "enable_quickack": false, 00:42:43.376 "enable_placement_id": 0, 00:42:43.376 "enable_zerocopy_send_server": true, 00:42:43.376 "enable_zerocopy_send_client": false, 00:42:43.376 "zerocopy_threshold": 0, 00:42:43.376 "tls_version": 0, 00:42:43.376 "enable_ktls": false 00:42:43.376 } 00:42:43.376 }, 00:42:43.376 { 00:42:43.376 "method": "sock_impl_set_options", 00:42:43.376 "params": { 00:42:43.376 "impl_name": "posix", 00:42:43.376 "recv_buf_size": 2097152, 00:42:43.376 "send_buf_size": 2097152, 00:42:43.376 "enable_recv_pipe": true, 00:42:43.376 "enable_quickack": false, 00:42:43.376 "enable_placement_id": 0, 00:42:43.376 "enable_zerocopy_send_server": true, 00:42:43.376 "enable_zerocopy_send_client": false, 00:42:43.376 "zerocopy_threshold": 0, 00:42:43.376 "tls_version": 0, 00:42:43.376 "enable_ktls": false 00:42:43.376 } 00:42:43.376 } 00:42:43.376 ] 00:42:43.376 }, 00:42:43.376 { 00:42:43.376 "subsystem": "vmd", 00:42:43.376 "config": [] 00:42:43.376 }, 00:42:43.376 { 00:42:43.376 "subsystem": "accel", 00:42:43.376 "config": [ 00:42:43.376 { 00:42:43.376 "method": "accel_set_options", 00:42:43.376 "params": { 00:42:43.376 "small_cache_size": 128, 00:42:43.376 "large_cache_size": 16, 00:42:43.376 "task_count": 2048, 00:42:43.376 "sequence_count": 2048, 00:42:43.376 "buf_count": 2048 00:42:43.376 } 
00:42:43.376 } 00:42:43.376 ] 00:42:43.376 }, 00:42:43.376 { 00:42:43.376 "subsystem": "bdev", 00:42:43.376 "config": [ 00:42:43.376 { 00:42:43.376 "method": "bdev_set_options", 00:42:43.376 "params": { 00:42:43.376 "bdev_io_pool_size": 65535, 00:42:43.376 "bdev_io_cache_size": 256, 00:42:43.376 "bdev_auto_examine": true, 00:42:43.376 "iobuf_small_cache_size": 128, 00:42:43.376 "iobuf_large_cache_size": 16 00:42:43.376 } 00:42:43.376 }, 00:42:43.376 { 00:42:43.376 "method": "bdev_raid_set_options", 00:42:43.376 "params": { 00:42:43.376 "process_window_size_kb": 1024, 00:42:43.376 "process_max_bandwidth_mb_sec": 0 00:42:43.376 } 00:42:43.376 }, 00:42:43.376 { 00:42:43.376 "method": "bdev_iscsi_set_options", 00:42:43.376 "params": { 00:42:43.376 "timeout_sec": 30 00:42:43.376 } 00:42:43.376 }, 00:42:43.376 { 00:42:43.376 "method": "bdev_nvme_set_options", 00:42:43.376 "params": { 00:42:43.376 "action_on_timeout": "none", 00:42:43.376 "timeout_us": 0, 00:42:43.376 "timeout_admin_us": 0, 00:42:43.376 "keep_alive_timeout_ms": 10000, 00:42:43.376 "arbitration_burst": 0, 00:42:43.376 "low_priority_weight": 0, 00:42:43.376 "medium_priority_weight": 0, 00:42:43.376 "high_priority_weight": 0, 00:42:43.376 "nvme_adminq_poll_period_us": 10000, 00:42:43.376 "nvme_ioq_poll_period_us": 0, 00:42:43.376 "io_queue_requests": 512, 00:42:43.376 "delay_cmd_submit": true, 00:42:43.376 "transport_retry_count": 4, 00:42:43.376 "bdev_retry_count": 3, 00:42:43.376 "transport_ack_timeout": 0, 00:42:43.376 "ctrlr_loss_timeout_sec": 0, 00:42:43.376 "reconnect_delay_sec": 0, 00:42:43.376 "fast_io_fail_timeout_sec": 0, 00:42:43.376 "disable_auto_failback": false, 00:42:43.376 "generate_uuids": false, 00:42:43.376 "transport_tos": 0, 00:42:43.376 "nvme_error_stat": false, 00:42:43.376 "rdma_srq_size": 0, 00:42:43.376 "io_path_stat": false, 00:42:43.376 "allow_accel_sequence": false, 00:42:43.376 "rdma_max_cq_size": 0, 00:42:43.376 "rdma_cm_event_timeout_ms": 0, 00:42:43.376 "dhchap_digests": [ 00:42:43.376 "sha256", 00:42:43.376 "sha384", 00:42:43.376 "sha512" 00:42:43.376 ], 00:42:43.376 "dhchap_dhgroups": [ 00:42:43.376 "null", 00:42:43.376 "ffdhe2048", 00:42:43.376 "ffdhe3072", 00:42:43.376 "ffdhe4096", 00:42:43.376 "ffdhe6144", 00:42:43.376 "ffdhe8192" 00:42:43.376 ] 00:42:43.376 } 00:42:43.376 }, 00:42:43.376 { 00:42:43.376 "method": "bdev_nvme_attach_controller", 00:42:43.376 "params": { 00:42:43.376 "name": "nvme0", 00:42:43.376 "trtype": "TCP", 00:42:43.376 "adrfam": "IPv4", 00:42:43.376 "traddr": "127.0.0.1", 00:42:43.376 "trsvcid": "4420", 00:42:43.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:43.376 "prchk_reftag": false, 00:42:43.376 "prchk_guard": false, 00:42:43.376 "ctrlr_loss_timeout_sec": 0, 00:42:43.376 "reconnect_delay_sec": 0, 00:42:43.376 "fast_io_fail_timeout_sec": 0, 00:42:43.376 "psk": "key0", 00:42:43.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:43.377 "hdgst": false, 00:42:43.377 "ddgst": false, 00:42:43.377 "multipath": "multipath" 00:42:43.377 } 00:42:43.377 }, 00:42:43.377 { 00:42:43.377 "method": "bdev_nvme_set_hotplug", 00:42:43.377 "params": { 00:42:43.377 "period_us": 100000, 00:42:43.377 "enable": false 00:42:43.377 } 00:42:43.377 }, 00:42:43.377 { 00:42:43.377 "method": "bdev_wait_for_examine" 00:42:43.377 } 00:42:43.377 ] 00:42:43.377 }, 00:42:43.377 { 00:42:43.377 "subsystem": "nbd", 00:42:43.377 "config": [] 00:42:43.377 } 00:42:43.377 ] 00:42:43.377 }' 00:42:43.377 00:07:17 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bperf.sock...' 00:42:43.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:43.377 00:07:17 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:43.377 00:07:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:43.377 [2024-11-20 00:07:17.534300] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 00:42:43.377 [2024-11-20 00:07:17.534392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid421607 ] 00:42:43.377 [2024-11-20 00:07:17.604131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:43.377 [2024-11-20 00:07:17.651190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:43.635 [2024-11-20 00:07:17.833659] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:43.635 00:07:17 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:43.635 00:07:17 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:43.635 00:07:17 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:43.635 00:07:17 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:43.635 00:07:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:44.203 00:07:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:44.203 00:07:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:44.203 00:07:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:44.203 00:07:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:44.203 00:07:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:44.203 00:07:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:44.203 00:07:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:44.203 00:07:18 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:44.203 00:07:18 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:44.203 00:07:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:44.203 00:07:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:44.203 00:07:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:44.203 00:07:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:44.203 00:07:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:44.461 00:07:18 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:44.461 00:07:18 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:44.461 00:07:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:44.461 00:07:18 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:45.036 00:07:19 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:45.036 00:07:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:45.036 00:07:19 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.D9MhfaGGDu /tmp/tmp.yfqLl5Ud1Z 00:42:45.036 00:07:19 keyring_file -- keyring/file.sh@20 -- # killprocess 421607 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 421607 ']' 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 421607 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 421607 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 421607' 00:42:45.036 killing process with pid 421607 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@973 -- # kill 421607 00:42:45.036 Received shutdown signal, test time was about 1.000000 seconds 00:42:45.036 00:42:45.036 Latency(us) 00:42:45.036 [2024-11-19T23:07:19.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:45.036 [2024-11-19T23:07:19.348Z] =================================================================================================================== 00:42:45.036 [2024-11-19T23:07:19.348Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@978 -- # wait 421607 00:42:45.036 00:07:19 keyring_file -- keyring/file.sh@21 -- # killprocess 420133 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 420133 ']' 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 420133 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 420133 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 420133' 00:42:45.036 killing process with pid 420133 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@973 -- # kill 420133 00:42:45.036 00:07:19 keyring_file -- common/autotest_common.sh@978 -- # wait 420133 00:42:45.608 00:42:45.608 real 0m14.567s 00:42:45.608 user 0m37.226s 00:42:45.608 sys 0m3.243s 00:42:45.608 00:07:19 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:45.608 00:07:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:45.608 ************************************ 00:42:45.608 END TEST keyring_file 00:42:45.608 ************************************ 00:42:45.608 00:07:19 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:42:45.608 00:07:19 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:45.608 00:07:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:45.608 00:07:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:45.608 00:07:19 -- 
common/autotest_common.sh@10 -- # set +x 00:42:45.608 ************************************ 00:42:45.608 START TEST keyring_linux 00:42:45.608 ************************************ 00:42:45.608 00:07:19 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:45.608 Joined session keyring: 175083067 00:42:45.608 * Looking for test storage... 00:42:45.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:45.608 00:07:19 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:45.608 00:07:19 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:42:45.608 00:07:19 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:45.608 00:07:19 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:45.608 00:07:19 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:45.609 00:07:19 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:45.609 00:07:19 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.609 --rc genhtml_branch_coverage=1 00:42:45.609 --rc genhtml_function_coverage=1 00:42:45.609 --rc genhtml_legend=1 00:42:45.609 --rc geninfo_all_blocks=1 00:42:45.609 --rc geninfo_unexecuted_blocks=1 00:42:45.609 00:42:45.609 ' 00:42:45.609 00:07:19 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.609 --rc genhtml_branch_coverage=1 00:42:45.609 --rc genhtml_function_coverage=1 00:42:45.609 --rc genhtml_legend=1 00:42:45.609 --rc geninfo_all_blocks=1 00:42:45.609 --rc geninfo_unexecuted_blocks=1 00:42:45.609 00:42:45.609 ' 00:42:45.609 00:07:19 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.609 --rc genhtml_branch_coverage=1 00:42:45.609 --rc genhtml_function_coverage=1 00:42:45.609 --rc genhtml_legend=1 00:42:45.609 --rc geninfo_all_blocks=1 00:42:45.609 --rc geninfo_unexecuted_blocks=1 00:42:45.609 00:42:45.609 ' 00:42:45.609 00:07:19 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.609 --rc genhtml_branch_coverage=1 00:42:45.609 --rc genhtml_function_coverage=1 00:42:45.609 --rc genhtml_legend=1 00:42:45.609 --rc geninfo_all_blocks=1 00:42:45.609 --rc geninfo_unexecuted_blocks=1 00:42:45.609 00:42:45.609 ' 00:42:45.609 00:07:19 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:45.609 00:07:19 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:45.609 00:07:19 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.609 00:07:19 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.609 00:07:19 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.609 00:07:19 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:45.609 00:07:19 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
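
The keyring_linux suite runs under scripts/keyctl-session-wrapper, which is why the transcript above reports "Joined session keyring: 175083067": the test is given a fresh session keyring, so the user keys it is about to add never land in the caller's keyrings and are discarded with the session. The wrapper's contents are not shown in this log; a minimal sketch of the same pattern, assuming keyctl from keyutils and an illustrative key name:

# Hypothetical sketch: run a command inside a throwaway session keyring.
keyctl session - bash -c '
    keyctl add user :spdk-test:demo "not-a-real-secret" @s  # prints the new key serial
    keyctl show @s                                          # the key lives only in this session
'
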
00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:45.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:45.609 00:07:19 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:45.609 00:07:19 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:45.609 00:07:19 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:45.609 00:07:19 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:45.609 00:07:19 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:45.609 00:07:19 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:45.609 /tmp/:spdk-test:key0 00:42:45.609 00:07:19 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:45.609 00:07:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:45.609 
00:07:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:45.609 00:07:19 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:45.868 00:07:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:45.868 00:07:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:45.868 /tmp/:spdk-test:key1 00:42:45.868 00:07:19 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=422084 00:42:45.868 00:07:19 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:45.868 00:07:19 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 422084 00:42:45.868 00:07:19 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 422084 ']' 00:42:45.868 00:07:19 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:45.868 00:07:19 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:45.868 00:07:19 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:45.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:45.868 00:07:19 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:45.868 00:07:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:45.868 [2024-11-20 00:07:20.019504] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
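
Both suites derive their PSKs through format_interchange_psk: a hex key and a digest selector are wrapped into an NVMeTLSkey-1 interchange string (for key0, 00112233445566778899aabbccddeeff with digest 0, the result appears verbatim in the keyctl steps below as NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:). The actual recipe is the python snippet embedded in nvmf/common.sh, which the log does not print; a minimal sketch of one way such a string can be assembled, assuming the key is used as its literal ASCII bytes and the trailing checksum is a little-endian CRC32 appended before base64 encoding:

# Hypothetical sketch of an NVMeTLSkey-1 interchange string; the byte
# interpretation and CRC placement are assumptions, not read from nvmf/common.sh.
python3 - <<'PYEOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"   # hex string treated as raw ASCII bytes (assumption)
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte checksum appended to the key (assumption)
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PYEOF
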
00:42:45.868 [2024-11-20 00:07:20.019625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422084 ] 00:42:45.868 [2024-11-20 00:07:20.091899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:45.868 [2024-11-20 00:07:20.143331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:46.127 00:07:20 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:46.127 00:07:20 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:46.127 00:07:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:46.127 00:07:20 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.127 00:07:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:46.127 [2024-11-20 00:07:20.427668] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:46.386 null0 00:42:46.386 [2024-11-20 00:07:20.459756] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:46.386 [2024-11-20 00:07:20.460323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:46.386 00:07:20 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.386 00:07:20 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:46.386 334437894 00:42:46.386 00:07:20 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:46.386 502287024 00:42:46.386 00:07:20 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=422101 00:42:46.386 00:07:20 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:46.386 00:07:20 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 422101 /var/tmp/bperf.sock 00:42:46.386 00:07:20 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 422101 ']' 00:42:46.386 00:07:20 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:46.386 00:07:20 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:46.386 00:07:20 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:46.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:46.386 00:07:20 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:46.386 00:07:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:46.386 [2024-11-20 00:07:20.525335] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 22.11.4 initialization... 
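
Just above, keyring/linux.sh@66 and @67 load the two interchange strings into the session keyring with keyctl add user, and the serials they print (334437894 and 502287024) are what the later check_keys and cleanup steps resolve again with keyctl search and drop with keyctl unlink. A minimal sketch of that round trip, assuming keyctl from keyutils and reusing the key0 payload shown in the log:

# Hypothetical sketch: add a user key to the session keyring, rediscover its
# serial, inspect the payload, then unlink it.
sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0   # resolves the same serial as $sn
keyctl print "$sn"                      # dumps the payload (the NVMeTLSkey-1 string)
keyctl unlink "$sn"                     # drops the key; the log shows "1 links removed"
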
00:42:46.386 [2024-11-20 00:07:20.525413] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422101 ] 00:42:46.386 [2024-11-20 00:07:20.595432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:46.386 [2024-11-20 00:07:20.644326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:46.649 00:07:20 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:46.649 00:07:20 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:46.649 00:07:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:46.649 00:07:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:46.905 00:07:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:46.905 00:07:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:47.163 00:07:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:47.163 00:07:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:47.424 [2024-11-20 00:07:21.631868] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:47.424 nvme0n1 00:42:47.424 00:07:21 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:47.424 00:07:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:47.424 00:07:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:47.424 00:07:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:47.424 00:07:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:47.424 00:07:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:47.685 00:07:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:47.685 00:07:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:47.685 00:07:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:47.685 00:07:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:47.685 00:07:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:47.685 00:07:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:47.685 00:07:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:48.253 00:07:22 keyring_linux -- keyring/linux.sh@25 -- # sn=334437894 00:42:48.253 00:07:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:48.253 00:07:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:48.253 00:07:22 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 334437894 == \3\3\4\4\3\7\8\9\4 ]] 00:42:48.253 00:07:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 334437894 00:42:48.253 00:07:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:48.253 00:07:22 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:48.253 Running I/O for 1 seconds... 00:42:49.192 9154.00 IOPS, 35.76 MiB/s 00:42:49.192 Latency(us) 00:42:49.192 [2024-11-19T23:07:23.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:49.192 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:49.192 nvme0n1 : 1.01 9162.80 35.79 0.00 0.00 13869.85 4102.07 16214.09 00:42:49.192 [2024-11-19T23:07:23.504Z] =================================================================================================================== 00:42:49.192 [2024-11-19T23:07:23.504Z] Total : 9162.80 35.79 0.00 0.00 13869.85 4102.07 16214.09 00:42:49.192 { 00:42:49.192 "results": [ 00:42:49.192 { 00:42:49.192 "job": "nvme0n1", 00:42:49.192 "core_mask": "0x2", 00:42:49.192 "workload": "randread", 00:42:49.192 "status": "finished", 00:42:49.192 "queue_depth": 128, 00:42:49.192 "io_size": 4096, 00:42:49.192 "runtime": 1.013118, 00:42:49.192 "iops": 9162.80235865911, 00:42:49.192 "mibps": 35.79219671351215, 00:42:49.192 "io_failed": 0, 00:42:49.192 "io_timeout": 0, 00:42:49.192 "avg_latency_us": 13869.854376897636, 00:42:49.192 "min_latency_us": 4102.068148148148, 00:42:49.192 "max_latency_us": 16214.091851851852 00:42:49.192 } 00:42:49.192 ], 00:42:49.192 "core_count": 1 00:42:49.192 } 00:42:49.192 00:07:23 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:49.192 00:07:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:49.452 00:07:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:49.452 00:07:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:49.452 00:07:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:49.452 00:07:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:49.453 00:07:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:49.453 00:07:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:49.714 00:07:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:49.714 00:07:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:49.714 00:07:23 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:49.714 00:07:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:49.714 00:07:23 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:42:49.714 00:07:23 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:42:49.714 00:07:23 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:49.714 00:07:23 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:49.714 00:07:23 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:49.714 00:07:23 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:49.714 00:07:23 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:49.714 00:07:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:49.972 [2024-11-20 00:07:24.237226] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:49.972 [2024-11-20 00:07:24.237844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b8900 (107): Transport endpoint is not connected 00:42:49.972 [2024-11-20 00:07:24.238832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b8900 (9): Bad file descriptor 00:42:49.972 [2024-11-20 00:07:24.239831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:49.972 [2024-11-20 00:07:24.239853] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:49.972 [2024-11-20 00:07:24.239869] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:49.972 [2024-11-20 00:07:24.239885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:42:49.972 request: 00:42:49.972 { 00:42:49.972 "name": "nvme0", 00:42:49.972 "trtype": "tcp", 00:42:49.972 "traddr": "127.0.0.1", 00:42:49.972 "adrfam": "ipv4", 00:42:49.972 "trsvcid": "4420", 00:42:49.972 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:49.972 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:49.972 "prchk_reftag": false, 00:42:49.972 "prchk_guard": false, 00:42:49.972 "hdgst": false, 00:42:49.972 "ddgst": false, 00:42:49.972 "psk": ":spdk-test:key1", 00:42:49.972 "allow_unrecognized_csi": false, 00:42:49.972 "method": "bdev_nvme_attach_controller", 00:42:49.972 "req_id": 1 00:42:49.972 } 00:42:49.972 Got JSON-RPC error response 00:42:49.972 response: 00:42:49.972 { 00:42:49.972 "code": -5, 00:42:49.972 "message": "Input/output error" 00:42:49.972 } 00:42:49.973 00:07:24 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:42:49.973 00:07:24 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:49.973 00:07:24 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:49.973 00:07:24 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@33 -- # sn=334437894 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 334437894 00:42:49.973 1 links removed 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@33 -- # sn=502287024 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 502287024 00:42:49.973 1 links removed 00:42:49.973 00:07:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 422101 00:42:49.973 00:07:24 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 422101 ']' 00:42:49.973 00:07:24 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 422101 00:42:49.973 00:07:24 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:49.973 00:07:24 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:49.973 00:07:24 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 422101 00:42:50.231 00:07:24 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 422101' 00:42:50.232 killing process with pid 422101 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@973 -- # kill 422101 00:42:50.232 Received shutdown signal, test time was about 1.000000 seconds 00:42:50.232 00:42:50.232 
Latency(us) 00:42:50.232 [2024-11-19T23:07:24.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:50.232 [2024-11-19T23:07:24.544Z] =================================================================================================================== 00:42:50.232 [2024-11-19T23:07:24.544Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@978 -- # wait 422101 00:42:50.232 00:07:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 422084 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 422084 ']' 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 422084 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 422084 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 422084' 00:42:50.232 killing process with pid 422084 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@973 -- # kill 422084 00:42:50.232 00:07:24 keyring_linux -- common/autotest_common.sh@978 -- # wait 422084 00:42:50.799 00:42:50.799 real 0m5.186s 00:42:50.800 user 0m10.217s 00:42:50.800 sys 0m1.627s 00:42:50.800 00:07:24 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:50.800 00:07:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:50.800 ************************************ 00:42:50.800 END TEST keyring_linux 00:42:50.800 ************************************ 00:42:50.800 00:07:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:50.800 00:07:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:50.800 00:07:24 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:42:50.800 00:07:24 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:42:50.800 00:07:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:42:50.800 00:07:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:50.800 00:07:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:50.800 00:07:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:50.800 00:07:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:42:50.800 00:07:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:50.800 00:07:24 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:42:50.800 00:07:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:50.800 00:07:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:50.800 00:07:24 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:42:50.800 00:07:24 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:42:50.800 00:07:24 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:42:50.800 00:07:24 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:42:50.800 00:07:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:50.800 00:07:24 -- common/autotest_common.sh@10 -- # set +x 00:42:50.800 00:07:24 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:42:50.800 00:07:24 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:42:50.800 00:07:24 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:42:50.800 00:07:24 -- common/autotest_common.sh@10 -- # set +x 00:42:52.701 INFO: APP EXITING 00:42:52.701 INFO: 
killing all VMs 00:42:52.701 INFO: killing vhost app 00:42:52.701 INFO: EXIT DONE 00:42:53.638 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:42:53.638 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:42:53.638 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:42:53.638 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:42:53.638 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:42:53.638 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:42:53.896 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:42:53.896 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:42:53.896 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:42:53.896 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:42:53.896 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:42:53.896 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:42:53.896 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:42:53.896 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:42:53.896 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:42:53.896 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:42:53.896 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:42:55.273 Cleaning 00:42:55.273 Removing: /var/run/dpdk/spdk0/config 00:42:55.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:55.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:55.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:55.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:55.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:55.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:55.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:55.273 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:55.273 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:55.273 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:55.273 Removing: /var/run/dpdk/spdk1/config 00:42:55.273 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:55.273 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:55.273 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:55.273 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:55.273 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:55.273 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:55.273 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:55.273 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:55.273 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:55.273 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:55.273 Removing: /var/run/dpdk/spdk2/config 00:42:55.273 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:55.273 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:55.273 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:55.273 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:55.273 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:55.273 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:55.273 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:55.273 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:55.273 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:55.273 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:55.273 Removing: /var/run/dpdk/spdk3/config 00:42:55.273 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:55.273 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:55.273 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:55.273 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:55.273 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:55.273 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:55.273 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:55.273 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:55.273 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:55.273 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:55.273 Removing: /var/run/dpdk/spdk4/config 00:42:55.273 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:55.273 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:55.273 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:55.273 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:55.273 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:55.273 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:55.273 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:55.273 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:55.273 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:55.273 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:55.273 Removing: /dev/shm/bdev_svc_trace.1 00:42:55.273 Removing: /dev/shm/nvmf_trace.0 00:42:55.273 Removing: /dev/shm/spdk_tgt_trace.pid38248 00:42:55.273 Removing: /var/run/dpdk/spdk0 00:42:55.273 Removing: /var/run/dpdk/spdk1 00:42:55.273 Removing: /var/run/dpdk/spdk2 00:42:55.273 Removing: /var/run/dpdk/spdk3 00:42:55.273 Removing: /var/run/dpdk/spdk4 00:42:55.273 Removing: /var/run/dpdk/spdk_pid155004 00:42:55.273 Removing: /var/run/dpdk/spdk_pid158292 00:42:55.273 Removing: /var/run/dpdk/spdk_pid162125 00:42:55.273 Removing: /var/run/dpdk/spdk_pid167012 00:42:55.273 Removing: /var/run/dpdk/spdk_pid167014 00:42:55.273 Removing: /var/run/dpdk/spdk_pid167669 00:42:55.273 Removing: /var/run/dpdk/spdk_pid168316 00:42:55.273 Removing: /var/run/dpdk/spdk_pid168858 00:42:55.273 Removing: /var/run/dpdk/spdk_pid169259 00:42:55.273 Removing: /var/run/dpdk/spdk_pid169376 00:42:55.273 Removing: /var/run/dpdk/spdk_pid169523 00:42:55.273 Removing: /var/run/dpdk/spdk_pid169658 00:42:55.273 Removing: /var/run/dpdk/spdk_pid169662 00:42:55.273 Removing: /var/run/dpdk/spdk_pid170329 00:42:55.273 Removing: /var/run/dpdk/spdk_pid170865 00:42:55.273 Removing: /var/run/dpdk/spdk_pid171527 00:42:55.273 Removing: /var/run/dpdk/spdk_pid171926 00:42:55.273 Removing: /var/run/dpdk/spdk_pid171928 00:42:55.273 Removing: /var/run/dpdk/spdk_pid172184 00:42:55.273 Removing: /var/run/dpdk/spdk_pid173083 00:42:55.273 Removing: /var/run/dpdk/spdk_pid173812 00:42:55.273 Removing: /var/run/dpdk/spdk_pid179139 00:42:55.273 Removing: /var/run/dpdk/spdk_pid208039 00:42:55.273 Removing: /var/run/dpdk/spdk_pid210974 00:42:55.273 Removing: /var/run/dpdk/spdk_pid212148 00:42:55.273 Removing: /var/run/dpdk/spdk_pid213491 00:42:55.273 Removing: /var/run/dpdk/spdk_pid213857 00:42:55.273 Removing: /var/run/dpdk/spdk_pid214240 00:42:55.273 Removing: /var/run/dpdk/spdk_pid214387 00:42:55.273 Removing: /var/run/dpdk/spdk_pid214865 00:42:55.273 Removing: /var/run/dpdk/spdk_pid216153 00:42:55.273 Removing: /var/run/dpdk/spdk_pid216975 00:42:55.273 Removing: /var/run/dpdk/spdk_pid217316 00:42:55.273 Removing: /var/run/dpdk/spdk_pid218925 00:42:55.273 Removing: /var/run/dpdk/spdk_pid219345 00:42:55.273 Removing: 
/var/run/dpdk/spdk_pid219790 00:42:55.273 Removing: /var/run/dpdk/spdk_pid222231 00:42:55.273 Removing: /var/run/dpdk/spdk_pid225587 00:42:55.532 Removing: /var/run/dpdk/spdk_pid225588 00:42:55.532 Removing: /var/run/dpdk/spdk_pid225589 00:42:55.532 Removing: /var/run/dpdk/spdk_pid227780 00:42:55.532 Removing: /var/run/dpdk/spdk_pid229891 00:42:55.532 Removing: /var/run/dpdk/spdk_pid233410 00:42:55.532 Removing: /var/run/dpdk/spdk_pid256340 00:42:55.532 Removing: /var/run/dpdk/spdk_pid259114 00:42:55.532 Removing: /var/run/dpdk/spdk_pid262915 00:42:55.532 Removing: /var/run/dpdk/spdk_pid263847 00:42:55.532 Removing: /var/run/dpdk/spdk_pid264933 00:42:55.532 Removing: /var/run/dpdk/spdk_pid266011 00:42:55.532 Removing: /var/run/dpdk/spdk_pid268766 00:42:55.532 Removing: /var/run/dpdk/spdk_pid271231 00:42:55.532 Removing: /var/run/dpdk/spdk_pid273585 00:42:55.532 Removing: /var/run/dpdk/spdk_pid278435 00:42:55.532 Removing: /var/run/dpdk/spdk_pid278443 00:42:55.532 Removing: /var/run/dpdk/spdk_pid281214 00:42:55.532 Removing: /var/run/dpdk/spdk_pid281346 00:42:55.532 Removing: /var/run/dpdk/spdk_pid281499 00:42:55.532 Removing: /var/run/dpdk/spdk_pid281872 00:42:55.532 Removing: /var/run/dpdk/spdk_pid281883 00:42:55.532 Removing: /var/run/dpdk/spdk_pid282952 00:42:55.532 Removing: /var/run/dpdk/spdk_pid284132 00:42:55.532 Removing: /var/run/dpdk/spdk_pid285311 00:42:55.532 Removing: /var/run/dpdk/spdk_pid286485 00:42:55.532 Removing: /var/run/dpdk/spdk_pid287802 00:42:55.532 Removing: /var/run/dpdk/spdk_pid288980 00:42:55.532 Removing: /var/run/dpdk/spdk_pid292705 00:42:55.532 Removing: /var/run/dpdk/spdk_pid293124 00:42:55.532 Removing: /var/run/dpdk/spdk_pid294407 00:42:55.532 Removing: /var/run/dpdk/spdk_pid295144 00:42:55.532 Removing: /var/run/dpdk/spdk_pid298864 00:42:55.532 Removing: /var/run/dpdk/spdk_pid300831 00:42:55.532 Removing: /var/run/dpdk/spdk_pid304352 00:42:55.532 Removing: /var/run/dpdk/spdk_pid308322 00:42:55.532 Removing: /var/run/dpdk/spdk_pid314815 00:42:55.532 Removing: /var/run/dpdk/spdk_pid319239 00:42:55.532 Removing: /var/run/dpdk/spdk_pid319293 00:42:55.532 Removing: /var/run/dpdk/spdk_pid332050 00:42:55.532 Removing: /var/run/dpdk/spdk_pid332460 00:42:55.532 Removing: /var/run/dpdk/spdk_pid332975 00:42:55.532 Removing: /var/run/dpdk/spdk_pid333384 00:42:55.532 Removing: /var/run/dpdk/spdk_pid333968 00:42:55.532 Removing: /var/run/dpdk/spdk_pid334371 00:42:55.532 Removing: /var/run/dpdk/spdk_pid334785 00:42:55.532 Removing: /var/run/dpdk/spdk_pid335204 00:42:55.532 Removing: /var/run/dpdk/spdk_pid337690 00:42:55.532 Removing: /var/run/dpdk/spdk_pid337948 00:42:55.532 Removing: /var/run/dpdk/spdk_pid342367 00:42:55.532 Removing: /var/run/dpdk/spdk_pid342425 00:42:55.532 Removing: /var/run/dpdk/spdk_pid345785 00:42:55.532 Removing: /var/run/dpdk/spdk_pid348398 00:42:55.532 Removing: /var/run/dpdk/spdk_pid355299 00:42:55.532 Removing: /var/run/dpdk/spdk_pid355702 00:42:55.532 Removing: /var/run/dpdk/spdk_pid358082 00:42:55.532 Removing: /var/run/dpdk/spdk_pid358356 00:42:55.532 Removing: /var/run/dpdk/spdk_pid36054 00:42:55.532 Removing: /var/run/dpdk/spdk_pid360855 00:42:55.532 Removing: /var/run/dpdk/spdk_pid364546 00:42:55.532 Removing: /var/run/dpdk/spdk_pid366694 00:42:55.532 Removing: /var/run/dpdk/spdk_pid37296 00:42:55.532 Removing: /var/run/dpdk/spdk_pid373036 00:42:55.532 Removing: /var/run/dpdk/spdk_pid378769 00:42:55.532 Removing: /var/run/dpdk/spdk_pid380017 00:42:55.532 Removing: /var/run/dpdk/spdk_pid380614 00:42:55.532 Removing: 
/var/run/dpdk/spdk_pid38248 00:42:55.532 Removing: /var/run/dpdk/spdk_pid38636 00:42:55.532 Removing: /var/run/dpdk/spdk_pid390777 00:42:55.532 Removing: /var/run/dpdk/spdk_pid39260 00:42:55.532 Removing: /var/run/dpdk/spdk_pid393022 00:42:55.532 Removing: /var/run/dpdk/spdk_pid39402 00:42:55.532 Removing: /var/run/dpdk/spdk_pid395022 00:42:55.532 Removing: /var/run/dpdk/spdk_pid399944 00:42:55.532 Removing: /var/run/dpdk/spdk_pid399994 00:42:55.532 Removing: /var/run/dpdk/spdk_pid40118 00:42:55.532 Removing: /var/run/dpdk/spdk_pid40240 00:42:55.532 Removing: /var/run/dpdk/spdk_pid402880 00:42:55.532 Removing: /var/run/dpdk/spdk_pid404244 00:42:55.532 Removing: /var/run/dpdk/spdk_pid40498 00:42:55.533 Removing: /var/run/dpdk/spdk_pid405640 00:42:55.533 Removing: /var/run/dpdk/spdk_pid406508 00:42:55.533 Removing: /var/run/dpdk/spdk_pid408018 00:42:55.533 Removing: /var/run/dpdk/spdk_pid409392 00:42:55.533 Removing: /var/run/dpdk/spdk_pid414686 00:42:55.533 Removing: /var/run/dpdk/spdk_pid415061 00:42:55.533 Removing: /var/run/dpdk/spdk_pid415452 00:42:55.533 Removing: /var/run/dpdk/spdk_pid416998 00:42:55.533 Removing: /var/run/dpdk/spdk_pid41706 00:42:55.533 Removing: /var/run/dpdk/spdk_pid417278 00:42:55.533 Removing: /var/run/dpdk/spdk_pid417678 00:42:55.533 Removing: /var/run/dpdk/spdk_pid420133 00:42:55.533 Removing: /var/run/dpdk/spdk_pid420141 00:42:55.533 Removing: /var/run/dpdk/spdk_pid421607 00:42:55.533 Removing: /var/run/dpdk/spdk_pid422084 00:42:55.533 Removing: /var/run/dpdk/spdk_pid422101 00:42:55.533 Removing: /var/run/dpdk/spdk_pid42632 00:42:55.533 Removing: /var/run/dpdk/spdk_pid42946 00:42:55.533 Removing: /var/run/dpdk/spdk_pid43147 00:42:55.533 Removing: /var/run/dpdk/spdk_pid43371 00:42:55.533 Removing: /var/run/dpdk/spdk_pid43659 00:42:55.791 Removing: /var/run/dpdk/spdk_pid43835 00:42:55.791 Removing: /var/run/dpdk/spdk_pid43987 00:42:55.791 Removing: /var/run/dpdk/spdk_pid44175 00:42:55.791 Removing: /var/run/dpdk/spdk_pid44521 00:42:55.792 Removing: /var/run/dpdk/spdk_pid46999 00:42:55.792 Removing: /var/run/dpdk/spdk_pid47191 00:42:55.792 Removing: /var/run/dpdk/spdk_pid47345 00:42:55.792 Removing: /var/run/dpdk/spdk_pid47452 00:42:55.792 Removing: /var/run/dpdk/spdk_pid47752 00:42:55.792 Removing: /var/run/dpdk/spdk_pid47885 00:42:55.792 Removing: /var/run/dpdk/spdk_pid48186 00:42:55.792 Removing: /var/run/dpdk/spdk_pid48199 00:42:55.792 Removing: /var/run/dpdk/spdk_pid48484 00:42:55.792 Removing: /var/run/dpdk/spdk_pid48495 00:42:55.792 Removing: /var/run/dpdk/spdk_pid48658 00:42:55.792 Removing: /var/run/dpdk/spdk_pid48788 00:42:55.792 Removing: /var/run/dpdk/spdk_pid49165 00:42:55.792 Removing: /var/run/dpdk/spdk_pid49322 00:42:55.792 Removing: /var/run/dpdk/spdk_pid49648 00:42:55.792 Removing: /var/run/dpdk/spdk_pid51763 00:42:55.792 Removing: /var/run/dpdk/spdk_pid54391 00:42:55.792 Removing: /var/run/dpdk/spdk_pid61269 00:42:55.792 Removing: /var/run/dpdk/spdk_pid61780 00:42:55.792 Removing: /var/run/dpdk/spdk_pid64213 00:42:55.792 Removing: /var/run/dpdk/spdk_pid64488 00:42:55.792 Removing: /var/run/dpdk/spdk_pid67133 00:42:55.792 Removing: /var/run/dpdk/spdk_pid71479 00:42:55.792 Removing: /var/run/dpdk/spdk_pid73675 00:42:55.792 Removing: /var/run/dpdk/spdk_pid79979 00:42:55.792 Removing: /var/run/dpdk/spdk_pid85209 00:42:55.792 Removing: /var/run/dpdk/spdk_pid86542 00:42:55.792 Removing: /var/run/dpdk/spdk_pid87216 00:42:55.792 Removing: /var/run/dpdk/spdk_pid97468 00:42:55.792 Removing: /var/run/dpdk/spdk_pid99752 00:42:55.792 Clean 00:42:55.792 
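For reference, a minimal sketch of the keyring_linux flow exercised above, reconstructed only from the commands visible in this log. It assumes the same RPC socket (/var/tmp/bperf.sock) and the rpc.py/bdevperf.py paths shown in the trace; the attach with :spdk-test:key0 corresponds to the passing case earlier in the run, while the attach with :spdk-test:key1 is the case the NOT wrapper expects to fail.

    # Resolve the serial number of the session-keyring user key and inspect the stored PSK
    SN=$(keyctl search @s user :spdk-test:key0)
    keyctl print "$SN"

    # Attach a controller through the bdevperf RPC socket using the keyring-backed PSK
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

    # Run the short randread pass, then detach
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
    $RPC -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0

    # Cleanup, mirroring linux.sh: unlink both test keys from the session keyring
    for key in key0 key1; do
        keyctl unlink "$(keyctl search @s user :spdk-test:$key)"
    done

The keyring_get_keys RPC seen in the trace is only used by the test to assert that zero keys remain registered after detach; it is not required for the attach/detach cycle itself.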
00:07:30 -- common/autotest_common.sh@1453 -- # return 0 00:42:55.792 00:07:30 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:42:55.792 00:07:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:55.792 00:07:30 -- common/autotest_common.sh@10 -- # set +x 00:42:55.792 00:07:30 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:42:55.792 00:07:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:55.792 00:07:30 -- common/autotest_common.sh@10 -- # set +x 00:42:55.792 00:07:30 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:55.792 00:07:30 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:55.792 00:07:30 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:55.792 00:07:30 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:42:55.792 00:07:30 -- spdk/autotest.sh@398 -- # hostname 00:42:55.792 00:07:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:56.051 geninfo: WARNING: invalid characters removed from testname! 00:43:28.174 00:08:00 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:30.703 00:08:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:33.984 00:08:07 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:36.513 00:08:10 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:39.797 00:08:13 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:42.334 00:08:16 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:45.615 00:08:19 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:45.615 00:08:19 -- spdk/autorun.sh@1 -- $ timing_finish 00:43:45.615 00:08:19 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:43:45.615 00:08:19 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:45.615 00:08:19 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:45.615 00:08:19 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:45.615 + [[ -n 4138715 ]] 00:43:45.615 + sudo kill 4138715 00:43:45.624 [Pipeline] } 00:43:45.637 [Pipeline] // stage 00:43:45.642 [Pipeline] } 00:43:45.654 [Pipeline] // timeout 00:43:45.658 [Pipeline] } 00:43:45.670 [Pipeline] // catchError 00:43:45.675 [Pipeline] } 00:43:45.690 [Pipeline] // wrap 00:43:45.695 [Pipeline] } 00:43:45.705 [Pipeline] // catchError 00:43:45.714 [Pipeline] stage 00:43:45.716 [Pipeline] { (Epilogue) 00:43:45.737 [Pipeline] catchError 00:43:45.739 [Pipeline] { 00:43:45.752 [Pipeline] echo 00:43:45.754 Cleanup processes 00:43:45.760 [Pipeline] sh 00:43:46.046 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:46.046 434377 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:46.062 [Pipeline] sh 00:43:46.349 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:46.349 ++ grep -v 'sudo pgrep' 00:43:46.349 ++ awk '{print $1}' 00:43:46.349 + sudo kill -9 00:43:46.349 + true 00:43:46.364 [Pipeline] sh 00:43:46.677 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:43:58.896 [Pipeline] sh 00:43:59.188 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:43:59.188 Artifacts sizes are good 00:43:59.203 [Pipeline] archiveArtifacts 00:43:59.210 Archiving artifacts 00:43:59.412 [Pipeline] sh 00:43:59.748 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:43:59.760 [Pipeline] cleanWs 00:43:59.768 [WS-CLEANUP] Deleting project workspace... 00:43:59.768 [WS-CLEANUP] Deferred wipeout is used... 00:43:59.775 [WS-CLEANUP] done 00:43:59.776 [Pipeline] } 00:43:59.786 [Pipeline] // catchError 00:43:59.795 [Pipeline] sh 00:44:00.073 + logger -p user.info -t JENKINS-CI 00:44:00.081 [Pipeline] } 00:44:00.095 [Pipeline] // stage 00:44:00.100 [Pipeline] } 00:44:00.114 [Pipeline] // node 00:44:00.120 [Pipeline] End of Pipeline 00:44:00.161 Finished: SUCCESS
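For readers reproducing the coverage post-processing outside Jenkins, a condensed sketch of the lcov filtering chain run in the epilogue above. It assumes cov_base.info and cov_test.info were already captured (as in the earlier hostname/geninfo step) and the same output directory layout; the long --rc switch list from the log is trimmed here to the lcov-relevant branch/function-coverage options.

    # Common lcov switches used by autotest (branch + function coverage, quiet)
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output

    # Merge the baseline and test captures into one tracefile
    lcov $LCOV_OPTS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info

    # Strip DPDK, system headers, and example/tool sources, matching the order in the log
    lcov $LCOV_OPTS -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info
    lcov $LCOV_OPTS -r $OUT/cov_total.info '/usr/*' --ignore-errors unused,unused -o $OUT/cov_total.info
    lcov $LCOV_OPTS -r $OUT/cov_total.info '*/examples/vmd/*' -o $OUT/cov_total.info
    lcov $LCOV_OPTS -r $OUT/cov_total.info '*/app/spdk_lspci/*' -o $OUT/cov_total.info
    lcov $LCOV_OPTS -r $OUT/cov_total.info '*/app/spdk_top/*' -o $OUT/cov_total.info

    # Drop the intermediate captures, as autotest.sh does
    rm -f cov_base.info cov_test.info

The resulting cov_total.info is what the archived coverage report is generated from; the flamegraph step that follows in the log only runs when /usr/local/FlameGraph/flamegraph.pl is present on the build node.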